WO2018179722A1 - Audio system, and sound control device and playback device thereof - Google Patents
Audio system, and sound control device and playback device thereof
- Publication number
- WO2018179722A1 (PCT/JP2018/001859)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- listener
- control
- playback device
- playback
- state
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/02—Synthesis of acoustic waves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/436—Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
Definitions
- the present invention relates to an acoustic system, an acoustic control device, and a playback device.
- as noted in Patent Document 1, in a living room or other living space, the speaker position is limited by the shape of the room and the arrangement of furniture.
- Patent Document 1 discloses an audio playback device (hereinafter referred to as the conventional example) that provides an optimal sound environment by arranging, in such an environment, a plurality of speakers, a plurality of amplifiers for driving the speakers, a microphone for sound control, and an acoustic correction device.
- in the conventional example, the speaker wiring work becomes complicated; further, as the number of speakers increases, the number of drive amplifiers also increases and the installation space grows.
- An object of the present disclosure is to provide an acoustic system that can easily perform acoustic adjustment, and an acoustic control device and a reproduction device thereof.
- the acoustic control device is an acoustic control device for an acoustic system in which a first playback device including a sensor for detecting the state of a first listener, and the acoustic control device including control means for controlling the sound of the first playback device, are each connected via a network.
- the acoustic control device is configured to transmit audio data to the first playback device, and the first playback device plays back the received audio data.
- the control means is configured to receive first listener data indicating the state of the first listener from the first playback device, determine the first playback control content of the first playback device based on the first listener data, and instruct the first playback device with the first playback control content.
- the acoustic control device is an acoustic control device for an acoustic system in which a first playback device including a sensor for detecting the state of a first listener, and the acoustic control device including control means for controlling the sound of the first playback device, are each connected via a network.
- the acoustic control device is configured to transmit audio data to the first playback device, and the first playback device plays back the received audio data.
- the control means is configured to receive first listener data indicating the state of the first listener from the first playback device, send the first listener data to a cloud artificial intelligence device, receive the first playback control content of the first playback device from the cloud artificial intelligence device, and instruct the first playback device with the first playback control content.
- the sensor may be a human sensor, and the state of the first listener may be the presence/absence and position of the first listener.
- the sensor may be a video camera, and the state of the first listener may be the presence/absence of the first listener and a state and position related to an operation.
- the first reproduction control content may be at least one of a volume value of the first reproduction device, a correction value of frequency and phase characteristics, and a delay amount.
- the acoustic system further includes a second playback device that is connected via the network and includes a sensor that detects the state of the second listener.
- the acoustic control device may be configured to transmit the audio data to the second playback device, and the second playback device plays back the received audio data.
- the control means may be configured to control the sound of the second playback device, receive second listener data indicating the state of the second listener from the second playback device, determine the second playback control content of the second playback device based on the second listener data, and instruct the second playback device with the second playback control content.
- the acoustic system further includes a second playback device that is connected via the network and includes a sensor for detecting the state of a second listener.
- the acoustic control device may be configured to transmit the audio data to the second playback device, and the second playback device plays back the received audio data.
- the control means may be configured to control the sound of the second playback device, receive second listener data indicating the state of the second listener from the second playback device, send the second listener data to the cloud artificial intelligence device, receive the second playback control content of the second playback device from the cloud artificial intelligence device, and instruct the second playback device with the second playback control content.
- the playback device is a playback device for an acoustic system in which the playback device including a sensor for detecting a listener's state, and an acoustic control device including control means for controlling the sound of the playback device, are each connected via a network, the acoustic control device transmitting audio data to the playback device.
- the playback device is configured to play back the received audio data; based on an instruction signal transmitted from the acoustic control device, it transmits listener data indicating the state of the listener detected by the sensor to the acoustic control device, receives a control signal including the playback control content from the acoustic control device, and plays back the audio data based on the received playback control content.
- the sensor may be a human sensor, and the state of the listener may be the presence/absence and position of the listener.
- the sensor may be a video camera, and the state of the listener may be the presence/absence of the listener and a state and position related to an operation.
- the playback control content may be at least one of a volume value of the playback apparatus, a correction value of frequency and phase characteristics, and a delay amount.
- the acoustic system according to the fourth aspect includes the above acoustic control device and the above playback device (the first playback device).
- the sensor may be a human sensor, and the state of the listener may be the presence/absence and position of the listener.
- the sensor may be a video camera, and the state of the listener may be the presence/absence of the listener and a state and position related to an operation.
- the playback control content may be at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
- FIG. 1A is a front view illustrating a configuration example of a room in which an acoustic system is arranged according to the first embodiment.
- 1B is a perspective view of an acoustic system disposed in the room of FIG. 1A.
- 2A is a front view of the playback device 100 of the sound system of FIG. 1B.
- 2B is a side view of the playback device 100 of the sound system of FIG. 1B.
- FIG. 2C is a block diagram illustrating a configuration example of the playback device 100 of the sound system of FIG. 1B.
- FIG. 2D is a side view showing a configuration example for fixing the playback device 100 of the sound system of FIG. 1B.
- FIG. 3 is a block diagram showing a detailed configuration of the playback device 100 of the sound system of FIG. 1B.
- FIG. 4 is a block diagram illustrating a configuration example of the main device 101 of the sound system of FIGS. 1A and 1B.
- FIG. 5 is a timing chart showing a configuration example of a transmission signal between the main device 101 and each playback device 100 of the sound system of FIG. 1B.
- FIG. 6 is a flowchart showing a playback device control process executed by the main device 101.
- FIG. 7 is a flowchart showing the initialization process (S1) in the subroutine of FIG. 6.
- FIG. 8 is a flowchart showing the end processing (S6) in the subroutine of FIG. 6.
- FIG. 9 is a flowchart showing the playback device individual control process (S4) in the subroutine of FIG. 6.
- FIG. 10 is a flowchart showing the viewing environment estimation process (S8), a subroutine of FIG. 6.
- FIG. 11 is a flowchart showing the reproduction control content determination process (S9) in the subroutine of FIG. 6.
- FIG. 12 is a table showing an example of a playback control table generated by the main device 101.
- FIG. 13 is a perspective view illustrating a configuration example of an acoustic system according to the second embodiment.
- FIG. 14 is a flowchart showing the viewing environment estimation process (S8A) by the main device 101 of the acoustic system of FIG. 13.
- FIG. 15 is a flowchart showing the playback control content determination process (S9A) by the main device 101 of the sound system of FIG. 13.
- FIG. 16 is a table showing an example of a playback control table generated by the main device 101.
- FIG. 17 is a perspective view illustrating a configuration example of an acoustic system according to the third embodiment.
- FIG. 18 is a block diagram illustrating a configuration example of the cloud artificial intelligence device 20 of FIG. 17.
- FIG. 19 is a flowchart showing the viewing environment estimation process (S8B) by the main device 101 of the acoustic system of FIG. 17.
- FIG. 20 is a flowchart showing the reproduction control content reception process (S9B) by the main device 101 of the acoustic system of FIG. 17.
- FIG. 21 is a table showing an example of a playback control table generated by the main device 101.
- FIG. 22 is a table showing an example of the rule table used in the first embodiment.
- FIG. 23 is a table showing an example of the rule table used in the first embodiment.
- FIG. 24 is a table showing an example of a rule table used in the first embodiment.
- FIG. 25 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 26 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 27 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 28 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 29 is a table showing an example of a rule table used in the second and third embodiments.
- FIG. 30 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 31 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 32 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 33 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 34 is a table showing an example of the rule table used in the second and third embodiments.
- FIG. 1A is a front view illustrating a configuration example of a room in which an acoustic system is arranged according to the first embodiment.
- a room in which the acoustic system is arranged includes a living room 1 and a dining room 2, for example.
- the living room 1 and the dining room 2 are separated by the shelf 4 and the hallway 3 arranged between them.
- a main body device 101 of the sound system, a sofa 5, and a table 6 are arranged in the living room 1.
- in the dining room 2, a kitchen 7 and a table 8 are arranged.
- FIG. 1B is a perspective view of an acoustic system arranged in the room of FIG. 1A.
- the acoustic system includes a main body device 101, which is an acoustic control device, and, for example, five playback devices 100A to 100E (hereinafter collectively referred to by reference numeral 100).
- the playback device 100A corresponds to the first playback device of the present disclosure
- the playback device 100E corresponds to the second playback device of the present disclosure.
- the playback devices 100A to 100E and the main device 101 are connected via a transmission cable 102.
- the transmission cable 102 is, for example, a known Ethernet cable.
- the Ethernet cable can transmit a transmission signal including audio data and control data using the four core wires and the ground conductor in the Ethernet cable.
- the main device 101 is placed on the floor of the living room 1.
- the playback devices 100A to 100D are provided on the ceiling 9 of the living room 1.
- the playback device 100E is provided on the ceiling 9 of the dining room 2.
- FIG. 2A is a front view of the playback device 100 of the sound system of FIG. 1B.
- the front here is the surface of the playback device 100 as viewed from below in FIG. 1B.
- a sensor 201 is provided in the lower left part of the front surface of the playback device 100, and a speaker 200 is provided in the center of the front surface.
- FIG. 2B is a side view of the playback device 100 of the sound system of FIG. 1B.
- a sensor 201 is provided at the bottom of the playback device 100.
- a speaker unit 200a of the speaker 200 is provided below the playback device 100.
- an amplifier 203 is provided at the center of the playback device 100.
- a signal processing unit 202 and transmission cable connectors 203a, 203b, and 203c are arranged side by side on the playback device 100.
- FIG. 2C is a block diagram illustrating a configuration example of the playback device 100 of the sound system of FIG. 1B.
- the playback device 100 includes a speaker 200, a sensor 201, a signal processing unit 202, and an amplifier 203.
- FIG. 2D is a side view showing a configuration example for fixing the playback device 100 of the sound system of FIG. 1B to a building.
- the upper part of the playback device 100 is fixed to the building beam 31 with screws 32.
- the lower part of the playback device 100 is provided in contact with the ceiling 9, and the sensor 201 is provided so as to protrude downward from the ceiling 9.
- FIG. 3 is a block diagram showing a detailed configuration of the playback device 100 of the sound system of FIG. 1B.
- the signal processing unit 202 includes a control unit 300 having an internal memory 300m, transmission signal transmission / reception units 301, 302, and 303, a multiplexer 304, a delay memory 305, a frequency and phase characteristic correction circuit 306, and a variable amplifier 307.
- the control data 308 in the transmission signal is illustrated by a dotted line
- the audio data 309 or the audio signal is illustrated by a one-dot chain line.
- a solid line illustrates a control signal in the signal processing unit 202.
- the transmission cable 102 physically connected to the main device 101 or the upstream reproduction device 100 is connected to the transmission signal transmission / reception unit 301 via the transmission cable connector 203a.
- the transmission signal transmission / reception units 301, 302, and 303 are communication interfaces that transmit / receive control data 308 and audio data 309 to / from other playback devices 100 or the main device 101.
- the transmission signal transmission / reception unit 301 receives control data 308 (that is, a control signal including the playback control content, a first control signal including the first playback control content, or a second control signal including the second playback control content) from the main device 101 or the upstream playback device 100 and outputs it to the control unit 300.
- the transmission signal transmission / reception unit 301 transmits the control data 308 to the playback device 100 on the downstream side via the multiplexer 304, the transmission signal transmission / reception units 302 and 303, and the transmission cable connectors 203b and 203c.
- the control data 308 includes acoustic control data such as an activation instruction of the sensor 201, a delay value, a frequency and phase correction value, and a volume value, which will be described later.
- the transmission signal transmission / reception unit 302 receives control data 308 from the downstream playback device 100 and transmits it to the main device 101 or the upstream playback device 100 via the multiplexer 304, the transmission signal transmission / reception unit 301, and the transmission cable connector 203a.
- the transmission signal transmission / reception unit 303 likewise receives control data 308 from the downstream playback device 100 and transmits it to the main device 101 or the upstream playback device 100 via the multiplexer 304, the transmission signal transmission / reception unit 301, and the transmission cable connector 203a.
- the multiplexer 304 distributes the control data 308 to one of the transmission signal transmission / reception unit 302 and the transmission signal transmission / reception unit 303 based on the transmission destination data included in the control data 308 and transmits the control data.
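The routing role of the multiplexer can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation: the function name, frame layout, and the mapping of device IDs to ports are all invented for the example.

```python
# Hypothetical sketch of the multiplexer's routing decision: downlink
# control data carries transmission destination information, and the
# multiplexer forwards the frame to whichever of the two downstream
# ports (transmission signal transmission / reception unit 302 or 303)
# leads toward that device.

def route_control_data(frame, port_map):
    """Return the downstream port ("302" or "303") for a control frame."""
    destination = frame["destination"]
    if destination not in port_map:
        raise ValueError("unknown destination: " + destination)
    return port_map[destination]

# Example chain layout (assumed): S2/S3 hang behind port 302, S4/S5 behind 303.
PORT_MAP = {"S2": "302", "S3": "302", "S4": "303", "S5": "303"}
```

A frame addressed to S3 would thus leave through port 302, while one for S5 would leave through port 303.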
- the transmission signal transmission / reception unit 301 receives the audio data 309 from the main device 101 or the upstream playback device 100 and outputs it to the delay memory 305. Also, the transmission signal transmission / reception unit 301 transmits the audio data 309 to the downstream playback device 100 via the transmission signal transmission / reception units 302 and 303 and the transmission cable connectors 203b and 203c.
- the delay memory 305 includes a buffer memory, for example. The delay memory 305 refers to the delay value instructed by the control unit 300, delays the received audio data 309, and outputs the delayed audio data 309 to the frequency and phase characteristic correction circuit 306 (hereinafter referred to as the correction circuit 306).
- the correction circuit 306 corrects the input audio data 309 based on the frequency characteristic correction value and the phase characteristic correction value instructed by the control unit 300, and then outputs them to the variable amplifier 307.
- the variable amplifier 307 changes the signal level of the input audio data 309 according to the volume value specified by the control unit 300.
- the audio data 309 is DA (digital / analog) converted and output to the speaker 200 via the amplifier 203. As a result, the speaker 200 emits sound.
- the control unit 300 controls the operation of the signal processing unit 202 based on the program and data stored in the internal memory 300m (including the acoustic control data at the previous end or the acoustic control data instructed by the main device 101). In particular, the control unit 300 controls the operations of the delay memory 305, the correction circuit 306, and the variable amplifier 307 based on the control data 308 received by the transmission signal transmission / reception unit 301, thereby controlling the acoustic characteristics of the audio signal output from the speaker 200.
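The per-device audio path described above (delay memory, correction circuit, variable amplifier) can be sketched as a simple sample-by-sample chain. This is an illustrative model only: plain Python floats stand in for the DSP hardware, and the flat `_correct` method is a placeholder for the correction circuit 306.

```python
from collections import deque

# Minimal sketch of the playback device's signal chain:
# delay memory (305) -> frequency/phase correction (306) -> variable gain (307).

class PlaybackChain:
    def __init__(self, delay_samples, volume_gain):
        # Delay memory modeled as a FIFO pre-filled with silence.
        self._delay = deque([0.0] * delay_samples)
        self.volume_gain = volume_gain  # set by the control unit's volume value

    def _correct(self, sample):
        # Placeholder for the frequency and phase characteristic correction;
        # a real device would apply an EQ filter here.
        return sample

    def process(self, sample):
        self._delay.append(sample)
        delayed = self._delay.popleft()
        return self._correct(delayed) * self.volume_gain

chain = PlaybackChain(delay_samples=2, volume_gain=0.5)
output = [chain.process(s) for s in [1.0, 1.0, 1.0, 1.0]]
# The first two outputs come from the silent pre-fill of the delay memory.
```

The delay value, correction parameters, and volume gain correspond to the acoustic control data the control unit 300 receives from the main device 101.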
- the sensor 201 is a human sensor connected to the control unit 300.
- the sensor 201 detects the presence or absence of a human body (listener) by detecting the temperature of a two-dimensional area of, for example, 8 × 8 pixels.
- the sensor 201 detects the position of the human body when the human body exists.
- data indicating the presence / absence of a human body and the position of the human body are referred to as detection data.
- the detection data corresponds to the listener data (or the first listener data or the second listener data) of the present disclosure.
- the control unit 300 operates the sensor 201 based on an instruction signal in the control data from the main device 101, and transmits detection data from the sensor 201 to the main device 101 as control data for the main device 101.
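Deriving the detection data (presence and position) from the sensor's 8 × 8 temperature grid might look like the following sketch. The temperature threshold and grid values are assumed for illustration; the patent does not specify them.

```python
# Illustrative conversion of an 8 x 8 thermal grid into detection data:
# presence of a human body and, when present, its position in the grid.

def detect_listener(grid, threshold=30.0):
    """Return (present, position): position is the warmest cell at or
    above the threshold, or None when nobody is detected."""
    best = None
    best_temp = threshold
    for row_idx, row in enumerate(grid):
        for col_idx, temp in enumerate(row):
            if temp >= best_temp:
                best = (row_idx, col_idx)
                best_temp = temp
    return (best is not None, best)

# A room-temperature grid, then the same grid with a warm body at cell (3, 5).
empty = [[20.0] * 8 for _ in range(8)]
occupied = [row[:] for row in empty]
occupied[3][5] = 34.0
```

The tuple returned here corresponds to the detection data the control unit 300 sends to the main device 101 as listener data.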
- FIG. 4 is a block diagram illustrating a configuration example of the main device 101 of the sound system of FIG. 1B.
- the main device 101 includes a control unit 10, a memory 11, a transmission signal transmission / reception unit 12, an operation unit 13, a display unit 14, and a transmission cable connector 15.
- the control unit 10 controls the operations of the processing units 11 to 14, for example, by executing the playback device control process of FIG.
- the memory 11 stores programs (such as playback device control processing in FIG. 6) and data (including playback control table data and audio data described later) necessary for the operation of the control unit 10.
- the transmission signal transmission / reception unit 12 is connected to each playback device 100 via a transmission cable connector 15 and a current detector.
- the transmission signal transmission / reception unit 12 transmits a transmission signal including control data and audio data to each playback device 100 based on the control of the control unit 10.
- the transmission signal transmission / reception unit 12 receives a transmission signal including control data from each playback device 100.
- the operation unit 13 includes at least one of a mouse, a keyboard, a touch panel, a button, a dial, and the like, and receives an input such as an operator's instruction.
- the display unit 14 displays data such as an instruction from the operator, control data of the received transmission signal, and the like.
- FIG. 5 is a timing chart showing a configuration example of a transmission signal between the main device 101 and each playback device 100 of the sound system of FIG. 1B.
- the audio data includes an L-channel audio signal and an R-channel audio signal in a sequence after an 8-bit preamble and 4-bit frequency information. Audio data is output by repeating this sequence configuration.
- the upstream control data is control data transmitted from each playback device 100 to the main device 101.
- the uplink control data is composed of transmission source information (S1_M information, S2_M information, S3_M information, etc.) and detection data placed before and after empty data.
- Downlink control data is control data transmitted from the main device 101 to each playback device 100.
- the downlink control data is composed of transmission destination information (M_ALL information, M_S1 information, M_S2 information, etc.) and control instruction data placed before and after empty data.
- the control instruction data includes a sensor activation instruction and acoustic control data.
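The uplink and downlink frame layouts described above can be sketched as simple records. The patent names the fields but not their exact sizes or encoding, so the dictionary shapes and the 4-byte padding below are assumptions for illustration only.

```python
# Assumed shapes for the control frames exchanged between the main
# device (M) and the playback devices (S1, S2, ...).

def build_downlink_frame(destination, instruction):
    """M_ALL broadcasts; M_S1, M_S2, ... address a single playback device."""
    return {
        "address": "M_" + destination,   # transmission destination information
        "instruction": instruction,      # sensor activation / acoustic control data
        "padding": b"\x00" * 4,          # the "empty data" between fields
    }

def build_uplink_frame(source, detection):
    """S1_M, S2_M, ... carry each device's detection data to the main device."""
    return {
        "address": source + "_M",        # transmission source information
        "detection": detection,          # e.g. presence and position
        "padding": b"\x00" * 4,
    }

down = build_downlink_frame("S2", {"sensor": "activate", "volume": -6})
up = build_uplink_frame("S2", {"present": True, "position": (3, 5)})
```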
- FIG. 6 is a flowchart showing a playback device control process executed by the main device 101 of FIG.
- In step S1 of FIG. 6, the main device 101 first executes the initialization process of FIG. 7, and in step S2 determines whether or not the auto mode is set. If YES in step S2, the process proceeds to step S7; if NO, the manual mode is determined to be set and the process proceeds to step S3. In step S3, it is determined whether or not an operation has been accepted by the operation unit 13. If YES in step S3, the process proceeds to step S4; if NO, to step S5. In step S4, after executing the playback device individual control process of FIG. 9, the process proceeds to step S5. In step S5, it is determined whether or not power-off has been instructed.
- If power-off has not been instructed (NO in step S5), the process returns to step S2; if it has (YES in step S5), the process proceeds to step S6.
- In step S6, after executing the end process of FIG. 8, the playback device control process ends.
- In step S7, the main device 101 instructs the sensor 201 of each playback device 100 to operate. The main device 101 then executes the viewing environment estimation process of FIG. 10 in step S8, executes the playback control content determination process of FIG. 11 in step S9, and proceeds to step S5.
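One pass of this auto-mode sequence (steps S7 to S9) can be condensed into a sketch. The helper functions are hypothetical stand-ins for the subroutines of FIGS. 10 and 11: `estimate` yields listener data per device, and `determine` maps that data to playback control content.

```python
# Condensed sketch of one auto-mode cycle of the main device.

def auto_mode_cycle(devices, estimate, determine):
    for device in devices:
        device["sensor_active"] = True                # S7: activate each sensor
    states = {d["id"]: estimate(d) for d in devices}  # S8: estimate viewing environment
    return {dev_id: determine(s)                      # S9: decide playback control
            for dev_id, s in states.items()}

devices = [{"id": "A"}, {"id": "E"}]
controls = auto_mode_cycle(
    devices,
    estimate=lambda d: "just below" if d["id"] == "A" else "absence",
    determine=lambda s: {"volume_db": 0} if s == "just below" else {"volume_db": -60},
)
```

In the real system the control contents would then be sent to each playback device as downlink control data.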
- FIG. 7 is a flowchart showing the initialization process (S1) in the subroutine of FIG. 6.
- the main device 101 confirms the connection of each playback device 100. That is, the main device 101 checks whether or not each playback device 100 is connected to the main device 101.
- the main device 101 instructs each playback device 100 whose connection has been confirmed to start from the standby state.
- instructing each playback device 100 means transmitting an instruction signal.
- the main device 101 activates each playback device 100 with the sound control data (delay value, volume value, and frequency and phase characteristic correction value) at the previous end stored in its internal memory 300m, and the process returns to the original main routine.
- FIG. 8 is a flowchart showing the end processing (S6) in the subroutine of FIG. 6.
- In step S21 of FIG. 8, the main device 101 instructs each playback device 100 to end; then, in step S22, the current sound control data (delay value, volume value, and frequency and phase characteristic correction value) of each playback device 100 is recorded in its internal memory 300m, the device enters the standby state, and the process returns to the original main routine.
- FIG. 9 is a flowchart showing the playback device individual control process (S4) in the subroutine of FIG. 6.
- In step S31, it is determined whether or not the operator has performed an operation for changing the volume value using the operation unit 13. If YES in step S31, the process proceeds to step S32; if NO, to step S33. In step S32, the main device 101 changes the volume value of the corresponding playback device 100 and then proceeds to step S33. In step S33, it is determined whether or not the operator has performed an operation for changing the delay value using the operation unit 13. If YES in step S33, the process proceeds to step S34; if NO, to step S35. In step S34, after changing the delay value of the corresponding playback device 100, the process proceeds to step S35.
- In step S35, it is determined whether or not the operator has changed the frequency and phase characteristic correction amount using the operation unit 13. If YES in step S35, the process proceeds to step S36; if NO, the process returns to the original main routine. In step S36, after changing the frequency and phase characteristic correction amount of the corresponding playback device 100, the process returns to the original main routine.
- FIG. 10 is a flowchart showing the viewing environment estimation process (S8), a subroutine of FIG. 6.
- each playback device 100 uses its sensor 201 to determine whether or not there is a person in its corresponding area.
- the corresponding area is an area where the sensor 201 can detect the presence / absence and position of a person.
- the corresponding area of each playback device 100 is referred to as the own area of that playback device 100. If YES in step S42, the process proceeds to step S43; if NO, to step S44.
- each playback device 100 identifies the position of the person from the image of the sensor 201, notifies the main device 101 of the presence and position of the person, and returns to the original main routine.
- each playback device 100 notifies the main device 101 of the absence of a person and returns to the original main routine.
- FIG. 11 is a flowchart showing the reproduction control content determination process (S9) in the subroutine of FIG. 6.
- FIG. 12 is a table showing an example of a playback control table generated by the main device 101. FIGS. 22 to 24 are tables showing examples of rule tables used in the first embodiment.
- the rule table is stored in advance in the memory 11 of the main device 101.
- the rule table includes rules R1 to R28; for each state estimated from the output of the sensor 201 (presence/absence of a person, position), it specifies the playback control rule based on that state and the playback control content for each playback device 100. The playback control table of FIG. 12 is therefore generated with reference to the output of the sensor 201 and the rule table.
- as shown in FIG. 12, the playback control table shows the playback control content for each playback device 100 in accordance with the output of the sensor 201 (presence/absence of a person, position).
- playback devices A to E indicate playback devices 100A to 100E, respectively.
- "there is a person at playback device A" means that there is a person in the vicinity of playback device 100A and within the detection area of its sensor 201, and "the person at playback device A" is a person in the vicinity of playback device 100A within the detection area of the sensor 201.
- In step S51 of FIG. 11, the main device 101 first acquires the playback control table (FIG. 12) for determining the playback control content.
- In step S52, the playback control content matching the estimated state is determined with reference to the acquired playback control table.
- In step S53, the corresponding playback device 100 is controlled based on the determined playback control content, and the process returns to the original main routine.
- the state of the person estimated from the sensor output includes, for example, "right", "left", "just below", and "absence"; the playback control content indicates, for example, how to control the volume value.
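The state-to-control lookup can be sketched as a small table. The dB values below are invented for illustration; the patent's rule tables (FIGS. 22 to 24) hold the actual per-device contents.

```python
# Rough shape of the playback control table: each estimated listener
# state maps to a playback control content (here, just a volume value).

CONTROL_TABLE = {
    "just below": {"volume_db": 0},
    "right":      {"volume_db": -3},
    "left":       {"volume_db": -3},
    "absence":    {"volume_db": -60},  # effectively muted
}

def playback_control(estimated_state):
    # Unknown states fall back to the "absence" behaviour.
    return CONTROL_TABLE.get(estimated_state, CONTROL_TABLE["absence"])
```

The main device would perform one such lookup per playback device and send the result as a control instruction.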
- As described above, the playback control content of the acoustic system can be determined and controlled according to the state of the person detected by the sensor 201 of each playback device 100, so acoustic adjustment is simpler than in the prior art. Further, when a playback device 100 is added, the main device 101 can adjust the sound of the added device, so extending the acoustic system is easier than in the prior art. In addition, an optimal acoustic environment can be realized in software regardless of the shape of the room or the arrangement of furniture, which makes it easy to relocate or extend the acoustic system.
- FIG. 13 is a perspective view illustrating a configuration example of an acoustic system according to the second embodiment.
- The acoustic system according to the second embodiment differs from the acoustic system according to the first embodiment in the following points.
- The sensor 201 is a video camera instead of a human-presence sensor.
- The state estimated from the sensor output includes states relating to a specific activity of the person (such as "reading" or "meal").
- The playback control process includes the viewing environment estimation process (S8A) of FIG. 14 instead of the viewing environment estimation process (S8).
- The playback control process includes the playback control content determination process (S9A) of FIG. 15.
- The acoustic system uses the rule tables of FIGS. 25 to 34 instead of the rule tables of FIGS. 22 to 24.
- The rule tables of FIGS. 25 to 34 contain rules R101 to R141; the playback control content of each playback device 100 is determined based not only on the presence and position of a person but also on the state relating to the person's specific activity. That is, the playback control table according to the second embodiment is generated by referring to the rule tables, based on the state relating to the person's specific activity obtained from the sensor output.
- The rule tables of FIGS. 25 to 34 are also used in the third embodiment described later.
- FIG. 14 is a flowchart showing the viewing environment estimation process (S8A) by the main device 101 of the acoustic system of FIG. 13.
- In step S41A of FIG. 14, each playback device 100 senses the state of a person in its own area.
- In step S42, it is determined whether or not a person is present. If YES, the process proceeds to step S43A; if NO, the process proceeds to step S44.
- In step S43A, the state relating to the person's specific activity is estimated from the image of the sensor 201 and notified to the main device 101, and the process returns to the main routine.
- In step S44, the absence of a person is notified to the main device 101, and the process returns to the main routine.
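The per-device flow of steps S41A to S44 can be sketched as follows; `classify_activity` is a placeholder for the image-based activity estimation of step S43A, and the labels follow the examples in the text:

```python
# Illustrative sketch of the viewing-environment estimation (S41A-S44).
# A device with no detected person is represented here by a None frame.

def classify_activity(image) -> str:
    """Placeholder for activity recognition from the camera image (S43A)."""
    raise NotImplementedError

def estimate_viewing_environment(devices, classify=classify_activity):
    """Run S41A-S44 for each playback device and collect notifications."""
    notifications = {}
    for device_id, frame in devices.items():
        if frame is None:                            # S42: no person detected
            notifications[device_id] = "absent"      # S44: notify absence
        else:
            notifications[device_id] = classify(frame)  # S43A: notify activity
    return notifications
```

The main device would then use these notifications as listener data when determining the playback control content.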
- FIG. 15 is a flowchart showing the playback control content determination process (S9A) by the main device 101 of the acoustic system of FIG. 13.
- FIG. 16 is a table showing an example of the playback control table generated by the main device 101 of FIG. 13.
- In step S51A of FIG. 15, the main device 101 first acquires the playback control table (FIG. 16) used to determine the content of playback control.
- In step S52A, the main device 101 refers to the acquired playback control table and determines the playback control content that matches the estimated specific state.
- In step S53A, the main device 101 controls each corresponding playback device 100 based on the determined playback control content, and returns to the main routine.
- As described above, the playback control content of the acoustic system can be determined and controlled according to the specific state relating to the person's activity detected by the sensor 201 of each playback device 100, so acoustic adjustment is simpler than in the prior art.
- When a playback device 100 is added, the main device 101 can adjust the sound of the added device, so extending the acoustic system is easier than in the prior art.
- Since an optimal acoustic environment can be realized regardless of the shape of the room or the arrangement of furniture, it is easy to relocate or extend the acoustic system.
- FIG. 17 is a perspective view illustrating a configuration example of an acoustic system according to the third embodiment.
- The acoustic system according to the third embodiment differs from the acoustic system according to the second embodiment in the following points.
- The acoustic system includes a cloud artificial intelligence device 20 connected to the main device 101 via a network.
- The cloud artificial intelligence device 20 determines the playback control content of the playback control table based on the state relating to the person's specific activity detected by the sensor 201, which is a video camera, and transmits it to the main device 101.
- The acoustic system uses the rule tables of FIGS. 25 to 34.
- FIG. 21 shows an example of the playback control table.
- FIG. 18 is a block diagram showing a configuration example of the cloud artificial intelligence device 20 of FIG.
- The cloud artificial intelligence device 20 includes a control unit 21, an artificial intelligence determination unit 22, a deep learning unit 23, switches SW1 and SW2, and a transmission signal transmission/reception unit 24 (hereinafter, transmission/reception unit 24).
- the transmission / reception unit 24 is connected to the main device 101 via the transmission cable 102.
- the transmission / reception unit 24 is connected to the input layer of the artificial intelligence determination unit 22 via the switch SW1.
- the output layer of the artificial intelligence determination unit 22 is connected to the transmission / reception unit 24 via the switch SW2.
- The control unit 21 controls the operation of the cloud artificial intelligence device 20. Specifically, the control unit 21 determines the playback control content based on the detection results of the sensors 201 of the playback devices 100 acquired from the main device 101 (that is, information on the state relating to a specific activity of the person who is the listener), and notifies the main device 101 of it.
- The artificial intelligence determination unit 22 is configured as a neural network including, for example, an input layer, a plurality of intermediate layers, and an output layer. It receives the state relating to the specific activity of the person at each playback device 100 as input and outputs the playback control content.
- During learning, the switches SW1 and SW2 are switched to the contact a side, and the deep learning unit 23 trains the artificial intelligence determination unit 22 using the rule tables of FIGS. 25 to 34 containing the rules R101 to R140.
- During operation, the switches SW1 and SW2 are switched to the contact b side, and the artificial intelligence determination unit 22 determines the playback control content based on the state relating to the specific activity at each playback device 100 received from the main device 101; the result is output and transmitted to the main device 101. That is, whereas the playback control content is determined by the main device 101 in the second embodiment, it is determined by the cloud artificial intelligence device 20 in the third embodiment.
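The role of the artificial intelligence determination unit 22 can be illustrated with a minimal sketch that maps a one-hot encoded activity state to playback control content; the single linear layer and hand-set weights below are illustrative stand-ins for the trained multi-layer network, not the actual rules R101 to R140:

```python
# Minimal, illustrative sketch: one-hot activity state -> control content.
# Weights are hand-set for illustration; a real unit would learn them.

STATES = ["reading", "meal", "study", "absent"]

# WEIGHTS[state_index] -> (volume_db, treble_db); illustrative values only,
# following the text: raise highs for "meal", flat for "reading"/"study".
WEIGHTS = [
    (0, 0),     # reading: flat characteristics
    (0, +3),    # meal: raise the high-frequency signal level
    (0, 0),     # study: flat characteristics
    (-60, 0),   # absent: effectively mute
]

def one_hot(state: str) -> list:
    return [1 if s == state else 0 for s in STATES]

def infer_control(state: str) -> dict:
    """Single linear layer standing in for the trained network's inference."""
    x = one_hot(state)
    volume = sum(xi * w[0] for xi, w in zip(x, WEIGHTS))
    treble = sum(xi * w[1] for xi, w in zip(x, WEIGHTS))
    return {"volume_db": volume, "treble_db": treble}
```

During learning, the weights would be fitted so that the network reproduces the rule tables; during operation, `infer_control` corresponds to the contact-b path through the determination unit.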
- FIG. 19 is a flowchart showing the viewing environment estimation process (S8B) executed by the main device 101 of FIG. 17.
- In step S41A of FIG. 19, each playback device 100 senses the state of a person in its own area.
- In step S42, the main device 101 determines whether a person is present. If YES, the process proceeds to step S43B; if NO, the process proceeds to step S44.
- In step S43B, the main device 101 estimates the state relating to the specific activity of the person at each playback device 100 from the image of the sensor 201, notifies the cloud artificial intelligence device 20, and returns to the main routine. In step S44, the absence of a person is notified to the main device 101, and the process returns to the main routine.
- FIG. 20 is a flowchart showing the playback control content determination process (S9B) by the main device 101 of the acoustic system of FIG. 17.
- FIG. 21 is a table showing an example of the playback control table generated by the main device 101 of FIG. 17.
- In step S52B of FIG. 20, the main device 101 first receives the playback control table from the cloud artificial intelligence device 20 and acquires the playback control content.
- In step S53B, the corresponding playback device 100 is controlled based on the acquired playback control content, and the process returns to the main routine.
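The delegation in steps S52B and S53B can be sketched as follows; `cloud_ai` and `apply_control` are hypothetical callables standing in for the cloud artificial intelligence device 20 and the device control path:

```python
# Sketch of the embodiment-3 flow (S52B-S53B): the main device obtains the
# playback-control table from the cloud AI and applies it to each device.

def determine_and_control(listener_states, cloud_ai, apply_control):
    """listener_states: device_id -> estimated activity state."""
    # S52B: receive the playback-control table from the cloud AI
    control_table = cloud_ai(listener_states)
    # S53B: control each corresponding playback device
    for device_id, control in control_table.items():
        apply_control(device_id, control)
    return control_table
```

This keeps the main device thin: all rule evaluation lives behind the `cloud_ai` callable, matching the simplification of the main device described below.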
- As described above, since the playback control content is determined by the cloud artificial intelligence device 20, the hardware and software configurations of the main device 101 can be simplified compared with the second embodiment.
- Other functions and effects are the same as those of the second embodiment.
- In the third embodiment, a video camera is used as the sensor 201, but the present invention is not limited to this; a human-presence sensor may be used.
- In the above embodiments, the volume value is changed as the playback control content according to the state of the person or the specific state thereof, but the present disclosure is not limited to this.
- For example, the correction value of the frequency and phase characteristics or the delay amount may be changed according to the state of the person; that is, at least one of the volume value, the correction value of the frequency and phase characteristics, and the delay amount may be controlled. For example, the high-frequency signal level may be raised for "meal", and the frequency characteristics may be flattened for "reading" or "study".
- The main device 101 may receive the detection result of the sensor 201 as listener data, and the control unit 10 of the main device 101 may estimate the state of the listener.
- Alternatively, the control unit 300 may estimate the state of the listener from the detection result of the sensor 201, and information on the estimated state may be transmitted to the main device 101 as the listener data.
- Alternatively, the cloud artificial intelligence device 20 may receive the detection result of the sensor 201 as listener data via the main device 101, the control unit 21 of the cloud artificial intelligence device 20 may estimate the state of the listener, and the estimated state information may be transmitted to the main device 101 as listener data.
- As described above, acoustic adjustment can be performed easily.
Abstract
According to the present invention, an audio system is provided with: a first playback device (100A) which includes a sensor that detects a state of a first listener, and a sound control device (101) which includes a control means that controls the sound of the first playback device (100A), the first playback device and the sound control device each being connected through a network. The control means of the sound control device (101) is configured to: receive, from the first playback device (100A), first listener data which indicates the state of the first listener; determine, on the basis of the first listener data, first playback control content of the first playback device (100A); and issue instructions about the first playback control content to the first playback device.
Description
The present invention relates to an acoustic system, and to an acoustic control device and a playback device thereof.
For example, as noted in Patent Document 1, in a living room or similar space in a home, speaker positions are restricted by the shape of the room or the arrangement of furniture. Patent Document 1 discloses an audio playback apparatus (hereinafter, the conventional example) that is installed in such an environment and provides an optimal acoustic environment, comprising a plurality of speakers, a plurality of amplifiers for driving them, a microphone for sound control, and an acoustic correction device.
When realizing an optimal acoustic environment not only in a specific room of the house (for example, the living room) but throughout the entire house, the speaker wiring work in the conventional example becomes complicated. Moreover, as the number of speakers increases, the number of drive amplifiers also increases, requiring more installation space.
An object of the present disclosure is to provide an acoustic system in which acoustic adjustment can be performed easily, together with an acoustic control device and a playback device for the system.
The acoustic control device according to the first aspect is an acoustic control device for an acoustic system comprising a first playback device including a sensor that detects the state of a first listener, and an acoustic control device including control means for controlling the sound of the first playback device, the devices being connected to each other via a network, wherein the acoustic control device transmits audio data to the first playback device and the first playback device plays back the received audio data.
The control means is configured to:
receive, from the first playback device, first listener data indicating the state of the first listener;
determine, based on the first listener data, first playback control content for the first playback device; and
instruct the first playback device with the first playback control content.
The acoustic control device according to the second aspect is an acoustic control device for an acoustic system comprising a first playback device including a sensor that detects the state of a first listener, and an acoustic control device including control means for controlling the sound of the first playback device, the devices being connected to each other via a network, wherein the acoustic control device transmits audio data to the first playback device and the first playback device plays back the received audio data.
The control means is configured to:
receive, from the first playback device, first listener data indicating the state of the first listener;
transmit the first listener data to a cloud artificial intelligence device;
receive, from the cloud artificial intelligence device, first playback control content for the first playback device; and
instruct the first playback device with the first playback control content.
In the acoustic control device, the sensor may be a human-presence sensor, and the state of the first listener may be the presence or absence and the position of the first listener.
In the acoustic control device, the sensor may be a video camera, and the state of the first listener may be the presence or absence of the first listener, a state relating to the listener's activity, and the listener's position.
In the acoustic control device, the first playback control content may be at least one of a volume value of the first playback device, a correction value of frequency and phase characteristics, and a delay amount.
The acoustic system may further comprise a second playback device that is connected via the network and includes a sensor that detects the state of a second listener, and the acoustic control device may be configured to transmit the audio data to the second playback device, with the second playback device playing back the received audio data.
The control means may further be configured to:
control the sound of the second playback device;
receive, from the second playback device, second listener data indicating the state of the second listener;
determine, based on the second listener data, second playback control content for the second playback device; and
instruct the second playback device with the second playback control content.
The acoustic system may further comprise a second playback device that is connected via the network and includes a sensor that detects the state of a second listener, and the acoustic control device may be configured to transmit the audio data to the second playback device, with the second playback device playing back the received audio data.
The control means may further be configured to:
control the sound of the second playback device;
receive, from the second playback device, second listener data indicating the state of the second listener;
transmit the second listener data to the cloud artificial intelligence device;
receive, from the cloud artificial intelligence device, second playback control content for the second playback device; and
instruct the second playback device with the second playback control content.
The playback device according to the third aspect is a playback device for an acoustic system comprising a playback device including a sensor that detects the state of a listener, and an acoustic control device including control means for controlling the sound of the playback device, the devices being connected to each other via a network, wherein the acoustic control device transmits audio data to the playback device and the playback device plays back the received audio data.
The playback device is configured to transmit, based on an instruction signal transmitted from the acoustic control device, listener data indicating the state of the listener detected by the sensor to the acoustic control device, to receive from the acoustic control device a control signal including playback control content, and to play back the audio data based on the received playback control content.
In the playback device, the sensor may be a human-presence sensor, and the state of the listener may be the presence or absence and the position of the listener.
In the playback device, the sensor may be a video camera, and the state of the listener may be the presence or absence of the listener, a state relating to the listener's activity, and the listener's position.
In the playback device, the playback control content may be at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
The acoustic system according to the fourth aspect comprises the acoustic control device described above and the playback device (the first playback device).
In the acoustic system, the sensor may be a human-presence sensor, and the state of the listener may be the presence or absence and the position of the listener.
In the acoustic system, the sensor may be a video camera, and the state of the listener may be the presence or absence of the listener, a state relating to the listener's activity, and the listener's position.
In the acoustic system, the playback control content may be at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
Hereinafter, embodiments according to the present disclosure will be described with reference to the drawings. In the following embodiments, the same components are given the same reference numerals.
(Embodiment 1)
FIG. 1A is a front view illustrating a configuration example of a room in which the acoustic system according to the first embodiment is arranged. In FIG. 1A, the room includes, for example, a living room 1 and a dining room 2. The living room 1 and the dining room 2 are separated from each other by a shelf 4 and a hallway 3 arranged between them. In the living room 1, a main device 101 of the acoustic system, a sofa 5, and a table 6 are arranged. In the dining room 2, a kitchen 7 and a table 8 are arranged.
FIG. 1B is a perspective view of the acoustic system arranged in the room of FIG. 1A. In FIG. 1B, the acoustic system includes a main device 101, which is the acoustic control device, and, for example, five playback devices 100A to 100E (hereinafter collectively referred to as playback devices 100). For example, the playback device 100A corresponds to the first playback device of the present disclosure, and the playback device 100E corresponds to the second playback device of the present disclosure. The playback devices 100A to 100E, and the playback devices 100 and the main device 101, are connected via transmission cables 102. The transmission cable 102 is, for example, a known Ethernet cable, which can carry a transmission signal containing audio data and control data over its four core wires and ground conductor. The main device 101 is placed, for example, on the floor of the living room 1. The playback devices 100A to 100D are provided on the ceiling 9 of the living room 1, and the playback device 100E is provided on the ceiling 9 of the dining room 2.
FIG. 2A is a front view of the playback device 100 of the acoustic system of FIG. 1B. Here, the front is the face of the playback device 100 seen from below on the page of FIG. 1B. As shown in FIG. 2A, a sensor 201 is provided at the lower left of the front face of the playback device 100, and a speaker 200 is provided at the center of the front face.
FIG. 2B is a side view of the playback device 100 of the acoustic system of FIG. 1B. As shown in FIG. 2B, a sensor 201 is provided at the bottom of the playback device 100, and a speaker unit 200a of the speaker 200 is provided in the lower part. An amplifier 203 is provided at the center of the playback device 100, and a signal processing unit 202 and transmission cable connectors 203a, 203b, and 203c are arranged side by side in the upper part.
FIG. 2C is a block diagram illustrating a configuration example of the playback device 100 of the acoustic system of FIG. 1B. In FIG. 2C, the playback device 100 includes a speaker 200, a sensor 201, a signal processing unit 202, and an amplifier 203.
FIG. 2D is a side view showing a configuration example for fixing the playback device 100 of the acoustic system of FIG. 1B to a building. In FIG. 2D, the upper part of the playback device 100 is fixed to a beam 31 of the building with screws 32, while the lower part is in contact with the ceiling 9, with the sensor 201 protruding downward from the ceiling 9.
FIG. 3 is a block diagram showing a detailed configuration of the playback device 100 of the sound system of FIG. 1B. In FIG. 3, the signal processing unit 202 includes a control unit 300 having an internal memory 300m, transmission signal transmission/reception units 301, 302, and 303, a multiplexer 304, a delay memory 305, a frequency and phase characteristic correction circuit 306, and a variable amplifier 307. In FIG. 3, the control data 308 in the transmission signal is drawn with a dotted line, the audio data 309 or audio signal is drawn with a dash-dotted line, and the control signals within the signal processing unit 202 are drawn with solid lines.
The transmission cable 102, physically connected to the main device 101 or to an upstream playback device 100, is connected to the transmission signal transmission/reception unit 301 via the transmission cable connector 203a. Here, the transmission signal transmission/reception units 301, 302, and 303 are communication interfaces that exchange control data 308 and audio data 309 with other playback devices 100 or with the main device 101.
The transmission signal transmission/reception unit 301 receives control data 308 from the main device 101 or an upstream playback device 100 (that is, a control signal including playback control content, a first control signal including first playback control content, or a second control signal including second playback control content) and outputs it to the control unit 300. The transmission signal transmission/reception unit 301 also transmits the control data 308 to downstream playback devices 100 via the multiplexer 304, the transmission signal transmission/reception units 302 and 303, and the transmission cable connectors 203b and 203c. The control data 308 includes, for example, an activation instruction for the sensor 201 and acoustic control data such as a delay value, frequency and phase correction values, and a volume value, described later.
The transmission signal transmission/reception unit 302 receives control data 308 from a downstream playback device 100 and transmits it to the main device 101 or an upstream playback device 100 via the multiplexer 304, the transmission signal transmission/reception unit 301, and the transmission cable connector 203a.
Similarly, the transmission signal transmission/reception unit 303 receives control data 308 from a downstream playback device 100 and transmits it to the main device 101 or an upstream playback device 100 via the multiplexer 304, the transmission signal transmission/reception unit 301, and the transmission cable connector 203a. The multiplexer 304 routes the control data 308 to either the transmission signal transmission/reception unit 302 or the transmission signal transmission/reception unit 303 based on the destination data included in the control data 308.
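The destination-based routing performed by the multiplexer 304 can be sketched as follows. This is an illustrative sketch only: the concrete frame layout, the dictionary-based port map, and the device identifiers (S1, S2, S3) are assumptions for illustration, not the encoding defined in this disclosure.

```python
# Hypothetical sketch of multiplexer 304: forward a control-data frame to the
# downstream port that serves its destination device.
def route_control_data(frame: dict, port_map: dict):
    """Return (port, frame), where port identifies transmission signal
    transmission/reception unit 302 or 303."""
    destination = frame["destination"]   # e.g. "S3" for a downstream device
    port = port_map.get(destination)
    if port is None:
        raise ValueError(f"unknown destination: {destination}")
    return port, frame

# Example wiring: devices S1 and S2 hang off port 302, S3 off port 303.
port_map = {"S1": 302, "S2": 302, "S3": 303}
port, frame = route_control_data({"destination": "S3", "volume": 40}, port_map)
# port == 303
```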
The transmission signal transmission/reception unit 301 also receives audio data 309 from the main device 101 or an upstream playback device 100 and outputs it to the delay memory 305, and transmits the audio data 309 to downstream playback devices 100 via the transmission signal transmission/reception units 302 and 303 and the transmission cable connectors 203b and 203c. The delay memory 305 includes, for example, a buffer memory; it delays the received audio data 309 according to the delay value specified by the control unit 300 and then outputs it to the frequency and phase characteristic correction circuit 306 (hereinafter, the correction circuit 306). The correction circuit 306 corrects the input audio data 309 based on the frequency characteristic correction value and the phase characteristic correction value specified by the control unit 300, and outputs the result to the variable amplifier 307. The variable amplifier 307 changes the signal level of the input audio data 309 according to the volume value specified by the control unit 300. The audio data 309 is then DA (digital-to-analog) converted and output to the speaker 200 via the amplifier 203, and the speaker 200 emits sound.
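The per-device signal chain described above (delay memory 305, correction circuit 306, variable amplifier 307) can be sketched as follows. The correction step here is a caller-supplied placeholder function, since the disclosure does not specify the correction filter itself; sample-level list processing is likewise only an illustration.

```python
# Sketch of the signal chain of FIG. 3, under the simplifying assumption that
# audio data is a list of float samples and the correction is a per-sample
# function supplied by the caller.
def process_block(samples, delay_samples, gain, eq=lambda x: x):
    """Apply delay memory 305, correction circuit 306 (here the `eq`
    function), and variable amplifier 307, in that order."""
    delayed = [0.0] * delay_samples + list(samples)   # delay memory 305
    corrected = [eq(s) for s in delayed]              # correction circuit 306
    return [gain * s for s in corrected]              # variable amplifier 307

out = process_block([1.0, 0.5], delay_samples=2, gain=0.5)
# out == [0.0, 0.0, 0.5, 0.25]
```

In the actual device the delay value, correction values, and volume value come from the control data 308 via the control unit 300 rather than being passed directly.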
The control unit 300 controls the operation of the signal processing unit 202 based on the program and data (including the acoustic control data saved at the previous shutdown, or acoustic control data instructed by the main device 101) stored in the internal memory 300m. In particular, the control unit 300 controls the acoustic characteristics of the audio signal output from the speaker 200 by controlling the operations of the delay memory 305, the correction circuit 306, and the variable amplifier 307 based on the control data 308 received by the transmission signal transmission/reception unit 301.
The sensor 201 is a human presence sensor connected to the control unit 300. The sensor 201 detects the presence or absence of a human body (a listener) by detecting the temperature over a two-dimensional area of, for example, 8 × 8 pixels, and, when a human body is present, detects its position. Hereinafter, the data indicating the presence or absence and the position of a human body is referred to as detection data. The detection data corresponds to the listener data (or the first listener data or second listener data) of the present disclosure. The control unit 300 operates the sensor 201 based on an instruction signal in the control data from the main device 101, and transmits the detection data from the sensor 201 to the main device 101 as control data addressed to the main device 101.
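Deriving detection data (presence and position) from an 8 × 8 thermal grid can be sketched as follows. The temperature threshold and the warmest-cell heuristic are illustrative assumptions; the disclosure only states that presence and position are detected from the 2D temperature area.

```python
# Hypothetical sketch of sensor 201's detection: a person appears as cells
# warmer than an assumed threshold (degrees Celsius).
def detect_listener(grid, threshold=30.0):
    """Return (present, position), where position is the (row, col) of the
    warmest cell above threshold, or None when nobody is detected."""
    best = None
    for r, row in enumerate(grid):
        for c, temp in enumerate(row):
            if temp >= threshold and (best is None or temp > grid[best[0]][best[1]]):
                best = (r, c)
    return (best is not None), best

grid = [[20.0] * 8 for _ in range(8)]
grid[3][5] = 34.0                     # warm spot where a person stands
present, pos = detect_listener(grid)  # present == True, pos == (3, 5)
```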
FIG. 4 is a block diagram showing a configuration example of the main device 101 of the sound system of FIG. 1B. In FIG. 4, the main device 101 includes a control unit 10, a memory 11, a transmission signal transmission/reception unit 12, an operation unit 13, a display unit 14, and a transmission cable connector 15. Here, the control unit 10 controls the operations of the units 11 to 14 by executing, for example, the playback device control process of FIG. 6.
The memory 11 stores the programs (such as the playback device control process of FIG. 6) and data (including the playback control table data and audio data described later) necessary for the operation of the control unit 10. The transmission signal transmission/reception unit 12 is connected to each playback device 100 via the transmission cable connector 15 and a current detector. Under the control of the control unit 10, the transmission signal transmission/reception unit 12 transmits a transmission signal including control data and audio data to each playback device 100, and receives transmission signals including control data from each playback device 100. The operation unit 13 includes at least one of a mouse, a keyboard, a touch panel, buttons, dials, and the like, and accepts inputs such as operator instructions. The display unit 14 displays data such as operator instructions and the control data of received transmission signals.
FIG. 5 is a timing chart showing a configuration example of the transmission signals between the main device 101 and each playback device 100 of the sound system of FIG. 1B.
In FIG. 5, the audio data contains, after an 8-bit preamble and 4 bits of frequency information, an L-channel audio signal followed by an R-channel audio signal. The audio data is output by repeating this sequence.
The upstream control data is control data transmitted from each playback device 100 to the main device 101. The upstream control data contains, before and after idle data, blocks of information (S1_M information, S2_M information, S3_M information, ...) each composed of source information and detection data.
The downstream control data is control data transmitted from the main device 101 to each playback device 100. The downstream control data contains, before and after idle data, blocks of information (M_ALL information, M_S1 information, M_S2 information, ...) each composed of destination information and control instruction data. Here, the control instruction data includes a sensor activation instruction and acoustic control data.
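The addressing scheme of the downstream control data (M_ALL for all devices, M_S1 for playback device S1, and so on) can be sketched as follows. The dictionary representation, field names, and instruction contents are assumptions for illustration; FIG. 5 defines only destination information plus control instruction data.

```python
# Hypothetical sketch of downstream control-data framing and filtering.
def build_downlink(entries):
    """Pack (destination, instruction) pairs, e.g. ("S1", ...) corresponds
    to M_S1 information and ("ALL", ...) to M_ALL information."""
    return [{"dst": dst, "instr": instr} for dst, instr in entries]

def entries_for(frame, device_id):
    """A playback device keeps entries addressed to it or to all devices."""
    return [e["instr"] for e in frame if e["dst"] in (device_id, "ALL")]

frame = build_downlink([("ALL", {"sensor": "start"}), ("S2", {"volume": 35})])
s2 = entries_for(frame, "S2")  # both the M_ALL and M_S2 entries apply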
FIG. 6 is a flowchart showing the playback device control process executed by the main device 101 of FIG. 4.
In step S1 of FIG. 6, the main device 101 first executes the initialization process of FIG. 7, and in step S2 determines whether the auto mode is set. If YES in step S2, the process proceeds to step S7. If NO in step S2, it is determined that the manual mode is set, and the process proceeds to step S3. In step S3, it is determined whether an operation has been accepted by the operation unit 13. If YES in step S3, the process proceeds to step S4; if NO, the process proceeds to step S5. In step S4, the playback device individual control process of FIG. 9 is executed, and the process then proceeds to step S5. In step S5, it is determined whether power-off has been instructed; if not (NO), the process returns to step S2, and if power-off has been instructed (YES), the process proceeds to step S6. In step S6, the end process of FIG. 8 is executed, and the playback device control process then ends. In step S7, the main device 101 instructs the sensor 201 of each playback device to operate. The main device 101 then executes the viewing environment estimation process of FIG. 10 in step S8, executes the control content determination process of FIG. 11 in step S9, and proceeds to step S5.
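The control flow of FIG. 6 can be sketched as the following loop. The `DemoSystem` class and all its method names are placeholder stubs standing in for the subroutines of FIGS. 7 to 11; only the branching structure reflects the flowchart.

```python
# Sketch of the playback device control process of FIG. 6, with stub methods
# in place of the subroutines (assumed interface, not the actual firmware).
def playback_control_loop(system):
    system.initialize()                       # S1: initialization (FIG. 7)
    while not system.power_off_requested():   # S5: power-off check
        if system.auto_mode:                  # S2: auto mode?
            system.start_sensors()            # S7: activate each sensor 201
            state = system.estimate_viewing_environment()  # S8 (FIG. 10)
            system.apply_control_content(state)            # S9 (FIG. 11)
        elif system.operation_received():     # S3: manual operation?
            system.individual_control()       # S4 (FIG. 9)
    system.shutdown()                         # S6: end process (FIG. 8)

class DemoSystem:
    def __init__(self, cycles=1):
        self.auto_mode = True
        self.cycles = cycles
        self.log = []
    def initialize(self): self.log.append("init")
    def power_off_requested(self):
        self.cycles -= 1
        return self.cycles < 0
    def start_sensors(self): self.log.append("sensors")
    def estimate_viewing_environment(self): return {"A": "present"}
    def apply_control_content(self, state): self.log.append(("apply", state))
    def operation_received(self): return False
    def individual_control(self): self.log.append("manual")
    def shutdown(self): self.log.append("end")

demo = DemoSystem()
playback_control_loop(demo)
# demo.log == ["init", "sensors", ("apply", {"A": "present"}), "end"]
```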
FIG. 7 is a flowchart showing the initialization process (S1), a subroutine of FIG. 6.
In step S11 of FIG. 7, the main device 101 first confirms the connection of each playback device 100, that is, checks whether each playback device 100 is connected to the main device 101. Next, in step S12, the main device 101 instructs each playback device 100 whose connection has been confirmed to start up from the standby state. (Hereinafter, instructing a playback device 100 means transmitting an instruction signal to it.) Then, in step S13, the main device 101 starts each playback device 100 with the acoustic control data (delay value, volume value, and frequency and phase characteristic correction values) saved in its internal memory 300m at the previous shutdown, and the process returns to the main routine.
FIG. 8 is a flowchart showing the end process (S6), a subroutine of FIG. 6.
In step S21 of FIG. 8, the main device 101 instructs each playback device 100 to shut down. Then, in step S22, the main device 101 has each playback device 100 record its current acoustic control data (delay value, volume value, and frequency and phase characteristic correction values) in its internal memory 300m and enter the standby state, and the process returns to the main routine.
FIG. 9 is a flowchart showing the playback device individual control process (S4), a subroutine of FIG. 6.
In step S31 of FIG. 9, it is first determined whether the operator has performed a volume-value change operation using the operation unit 13. If YES in step S31, the process proceeds to step S32; if NO, the process proceeds to step S33. In step S32, the main device 101 changes the volume value of the corresponding playback device 100 and proceeds to step S33. Next, in step S33, it is determined whether the operator has performed a delay-value change operation using the operation unit 13. If YES in step S33, the process proceeds to step S34; if NO, the process proceeds to step S35. In step S34, the delay value of the corresponding playback device 100 is changed, and the process proceeds to step S35. Next, in step S35, it is determined whether the operator has performed a change operation on the frequency and phase characteristic correction amounts using the operation unit 13. If YES in step S35, the process proceeds to step S36; if NO, the process returns to the main routine. In step S36, the frequency and phase characteristic correction amounts of the corresponding playback device 100 are changed, and the process returns to the main routine.
FIG. 10 is a flowchart showing the viewing environment estimation process (S8), a subroutine of FIG. 6.
In steps S41 and S42 of FIG. 10, upon receiving an instruction from the main device 101, each playback device 100 first uses its sensor 201 to determine whether a person is present in the area corresponding to that playback device 100. The corresponding area is the region in which the sensor 201 can detect the presence and position of a person; hereinafter, the corresponding area of each playback device 100 is referred to as that playback device's own area. If YES in step S42, the process proceeds to step S43; if NO, the process proceeds to step S44. In step S43, each playback device 100 identifies the person's position from the image of the sensor 201, notifies the main device 101 of the person's presence and position, and the process returns to the main routine. In step S44, each playback device 100 notifies the main device 101 of the person's absence, and the process returns to the main routine.
FIG. 11 is a flowchart showing the playback control content determination process (S9), a subroutine of FIG. 6. FIG. 12 is a table showing an example of the playback control table generated by the main device 101 of FIG. 4. FIGS. 22 to 24 are tables showing examples of the rule table used in Embodiment 1. The rule table is stored in advance in the memory 11 of the main device 101. The rule table contains rules R1 to R28, and for each state estimated from the output of the sensors 201 (presence or absence of a person, and position), it specifies the playback control rule based on that state and the playback control content for each playback device 100. The playback control table of FIG. 12 is therefore generated by referring to the sensor 201 outputs and the rule table. As shown in FIG. 12, the playback control table indicates the playback control content for each playback device 100 according to the sensor 201 outputs (presence or absence of a person, and position). In the rule table and the playback control table, playback devices A to E denote the playback devices 100A to 100E, respectively. For example, "there is a person at playback device A" means that a person is near the playback device 100A and within the detection area of its sensor 201, and "the person at playback device A" refers to a person near the playback device 100A and within the detection area of its sensor 201.
In step S51 of FIG. 11, the main device 101 first acquires the playback control table (FIG. 12) for determining the playback control content. Next, in step S52, the main device 101 refers to the acquired playback control table and determines the playback control content that matches the estimated state. Then, in step S53, the main device 101 controls each corresponding playback device 100 based on the determined playback control content, and the process returns to the main routine.
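The lookup of steps S51 to S53 can be sketched as follows. The table entries below are invented for illustration; they are not the actual rules R1 to R28 of FIGS. 22 to 24, and the per-device values stand in for whatever acoustic control data a rule specifies.

```python
# Hypothetical sketch of step S52: match the estimated listener state against
# the playback control table and return per-device volume values (A..E).
CONTROL_TABLE = {
    ("A", "directly below"): {"A": 60, "B": 40, "C": 20, "D": 0, "E": 0},
    ("A", "absent"):         {"A": 0,  "B": 0,  "C": 0,  "D": 0, "E": 0},
}

def determine_control(state):
    """Return the playback control content for the estimated state,
    or None when no rule matches."""
    return CONTROL_TABLE.get(state)

controls = determine_control(("A", "directly below"))
# controls["A"] == 60: the device the listener is under plays loudest
```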
In the playback control table of FIG. 12, the person's state estimated from the sensor output is, for example, on the "right" side, on the "left" side, "directly below", or "absent", and the playback control content indicates, for example, how the volume value of each playback device is to be controlled.
According to the acoustic system of Embodiment 1 configured as described above, the playback control content of the acoustic system can be determined and controlled according to the state of the person detected by the sensor 201 of each playback device 100. Acoustic adjustment is therefore simpler than in the prior art. Furthermore, when a playback device 100 is added to the main device 101, the acoustic adjustment of the added playback device 100 can be performed by the playback device 100 itself, so extension work on the acoustic system is easier than in the prior art. In addition, an optimal acoustic environment can be realized in software regardless of the shape of the room or the arrangement of furniture, making changes such as installing or extending the acoustic system easy.
(Embodiment 2)
FIG. 13 is a perspective view showing a configuration example of the acoustic system according to Embodiment 2. In FIG. 13, the acoustic system according to Embodiment 2 differs from the acoustic system according to Embodiment 1 in the following points.
(1) The sensor 201 is a video camera instead of a human presence sensor.
(2) Accordingly, as shown in FIG. 16, the states estimated from the sensor output include states relating to a person's specific activity (such as "reading" or "eating").
(3) The playback control process includes the viewing environment estimation process (S8A) of FIG. 14 instead of the viewing environment estimation process (S8) of FIG. 10.
(4) The playback control process includes the playback control content determination process (S9A) of FIG. 15 instead of the playback control content determination process (S9) of FIG. 11.
(5) The acoustic system uses the rule tables of FIGS. 25 to 34 instead of the rule tables of FIGS. 22 to 24. The rule tables of FIGS. 25 to 34 contain rules R101 to R141 and define the playback control content of each playback device 100 based not only on the presence and position of a person but also on the states relating to the person's specific activity. That is, the playback control table according to Embodiment 2 is generated by referring to these rule tables and based on the states relating to the person's specific activity derived from the sensor output. The rule tables of FIGS. 25 to 34 are also used in Embodiment 3, described later.
The differences described above are explained below.
FIG. 14 is a flowchart showing the viewing environment estimation process (S8A) in the acoustic system of FIG. 13.
In step S41A of FIG. 14, each playback device 100 senses the state of the person in its own area. Next, in step S42, it is determined whether a person is present. If YES in step S42, the process proceeds to step S43A; if NO, the process proceeds to step S44. In step S43A, a state relating to the person's specific activity is estimated from the image of the sensor 201 and reported to the main device 101, and the process returns to the main routine. In step S44, the person's absence is reported to the main device 101, and the process returns to the main routine.
FIG. 15 is a flowchart showing the playback control content determination process (S9A) performed by the main device 101 of the acoustic system of FIG. 13. FIG. 16 is a table showing an example of the playback control table generated by the main device 101 of FIG. 13.
In step S51A of FIG. 15, the main device 101 first acquires the playback control table (FIG. 16) for determining the playback control content. In step S52A, the main device 101 refers to the acquired playback control table and determines the playback control content that matches the estimated specific state. Then, in step S53A, the main device 101 controls each corresponding playback device 100 based on the determined playback control content, and the process returns to the main routine.
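Matching an activity state to playback control content (step S52A) can be sketched as follows. The activity entries, per-device values, and the fallback to a presence-only rule are illustrative assumptions; the actual rules are R101 to R141 of FIGS. 25 to 34, and the disclosure does not specify a fallback strategy.

```python
# Hypothetical sketch of step S52A: prefer an activity-specific rule and,
# as an assumed resolution strategy, fall back to a presence-only rule
# when no activity rule matches.
ACTIVITY_TABLE = {
    ("A", "reading"): {"A": 20, "B": 20, "C": 10, "D": 0, "E": 0},  # quiet
    ("A", "eating"):  {"A": 40, "B": 40, "C": 30, "D": 0, "E": 0},
}

def determine_control_for_activity(device, activity, fallback=None):
    """Return per-device control content for the estimated (device, activity)."""
    rule = ACTIVITY_TABLE.get((device, activity))
    return rule if rule is not None else fallback

presence_only = {"A": 50, "B": 30, "C": 0, "D": 0, "E": 0}
quiet = determine_control_for_activity("A", "reading", fallback=presence_only)
loud = determine_control_for_activity("A", "dancing", fallback=presence_only)
```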
According to the acoustic system of Embodiment 2 configured as described above, the playback control content of the acoustic system can be determined and controlled according to the state relating to the person's specific activity detected by the sensor 201 of each playback device 100. Acoustic adjustment is therefore simpler than in the prior art. Furthermore, when a playback device 100 is added to the main device 101, the acoustic adjustment of the added playback device 100 can be performed, so extension work on the acoustic system is easier than in the prior art. In addition, since an optimal acoustic environment can be realized regardless of the shape of the room or the arrangement of furniture, changes such as installing or extending the acoustic system are easy.
(Embodiment 3)
FIG. 17 is a perspective view showing a configuration example of the acoustic system according to Embodiment 3. In FIG. 17, the acoustic system according to Embodiment 3 differs from the acoustic system according to Embodiment 2 in the following points.
(1) The acoustic system includes a cloud artificial intelligence device 20 connected to the main device 101 via a network. The cloud artificial intelligence device 20 determines the playback control content of the playback control table based on the states relating to a person's specific activity detected by the sensor 201, which is, for example, a video camera, and transmits it to the main device 101.
(2) The acoustic system uses the rule tables of FIGS. 25 to 34, containing rules R101 to R141, as in Embodiment 2.
(3) In the playback control process, the viewing environment estimation process (S8B) of FIG. 19 is executed instead of the viewing environment estimation process (S8A) of FIG. 14.
(4) In the playback control process, the playback control content reception process (S9B) of FIG. 20 is executed instead of the playback control content determination process (S9A) of FIG. 15.
(5) FIG. 21 shows an example of the playback control table.
The differences noted above are described below.
FIG. 18 is a block diagram showing a configuration example of the cloud artificial intelligence device 20 of FIG. 17.
In FIG. 18, the cloud artificial intelligence device 20 includes a control unit 21, an artificial intelligence determination unit 22, a deep learning unit 23, switches SW1 and SW2, and a transmission signal transmission/reception unit 24 (hereinafter referred to as the transmission/reception unit 24). The transmission/reception unit 24 is connected to the main device 101 via the transmission cable 102, and to the input layer of the artificial intelligence determination unit 22 via the switch SW1. The output layer of the artificial intelligence determination unit 22 is connected to the transmission/reception unit 24 via the switch SW2.
The control unit 21 controls the operation of the cloud artificial intelligence device 20. Specifically, the control unit 21 determines the playback control content based on the detection results of the sensors 201 of the playback devices 100 acquired from the main device 101 (that is, information on states relating to the specific activities of the persons who are the listeners), and notifies the main device 101 of the determined content. The artificial intelligence determination unit 22 is configured as a neural network including, for example, an input layer, a plurality of intermediate layers, and an output layer; it takes as input the state relating to the specific activity of the person at each playback device 100, and outputs playback control content. Before the artificial intelligence determination unit 22 operates, the switches SW1 and SW2 are each set to the contact "a" side, and the deep learning unit 23 trains the artificial intelligence determination unit 22 using the rule tables of FIGS. 25 to 34 including the rules R101 to R140. After training, the switches SW1 and SW2 are each switched to the contact "b" side, and the artificial intelligence determination unit 22 determines and outputs the playback control content based on the states, received from the main device 101, relating to the specific activities of the persons at the playback devices 100, and transmits the content to the main device 101. That is, whereas the playback control content is determined by the main device 101 in Embodiment 2, in Embodiment 3 the cloud artificial intelligence device 20 determines the playback control content.
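The learn-then-infer flow above can be sketched as follows. This is a minimal illustration only: the patent describes a neural network trained by the deep learning unit 23, whereas this sketch uses a learned lookup table, and the rule entries and control values are invented stand-ins, not taken from the actual rule tables R101 to R140 of FIGS. 25 to 34.

```python
class PlaybackControlModel:
    """Stands in for the artificial intelligence determination unit 22.

    In the patent this is a neural network; here a lookup table learned
    from (state -> control) examples keeps the sketch self-contained.
    """

    def __init__(self):
        self.mapping = {}
        # Switches SW1/SW2 at contact "a": the deep learning unit 23 feeds the model.
        self.learning = True

    def learn(self, rule_table):
        # Training phase: absorb each rule-table example.
        for state, control in rule_table.items():
            self.mapping[state] = control

    def finish_learning(self):
        # Switch SW1/SW2 to contact "b": inputs now come from the main device 101.
        self.learning = False

    def determine(self, listener_state):
        # Inference: map a detected activity state to playback control content.
        if self.learning:
            raise RuntimeError("switches still at learning position")
        return self.mapping.get(listener_state, {"volume": "unchanged"})


# Hypothetical rule entries for illustration only.
rules = {
    "eating":  {"volume": -6, "eq": "boost_high"},
    "reading": {"volume": -12, "eq": "flat"},
}
model = PlaybackControlModel()
model.learn(rules)
model.finish_learning()
control = model.determine("reading")
```

The two-switch arrangement in FIG. 18 corresponds to the `learning` flag: while it is set, the determination unit is wired to the training data rather than to live sensor states.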
FIG. 19 is a flowchart showing the viewing environment estimation process (S8B) executed by the main device 101 of FIG. 17.
In step S41A of FIG. 19, each playback device 100 senses the state of any person in its own area. Next, in step S42, the main device 101 determines whether a person is present. If YES in step S42, the process proceeds to step S43B; if NO, the process proceeds to step S44. In step S43B, the main device 101 estimates the state relating to the specific activity of the person at each playback device 100 from the image of the sensor 201, notifies the cloud artificial intelligence device 20 of the estimated state, and returns to the original main routine. In step S44, on the other hand, the absence of a person is reported to the main device 101, and the process returns to the original main routine.
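The branching of steps S41A, S42, S43B, and S44 can be sketched as below. The sensor output format and the state labels are illustrative assumptions; in the patent, step S43B performs image analysis on the video-camera output, which is simplified here to passing a label through.

```python
def viewing_environment_estimation(sensor_frames):
    """Return what the main device 101 would send toward the cloud AI device 20.

    sensor_frames: per-playback-device sensor readings; None means the
    sensor saw nobody in that device's area (step S42 -> NO).
    """
    notifications = {}
    for device_id, frame in sensor_frames.items():
        if frame is None:
            # S44: report the absence of a person.
            notifications[device_id] = {"present": False}
        else:
            # S43B: estimate the specific activity state from the image
            # (the image analysis is stubbed out as a pass-through).
            notifications[device_id] = {"present": True, "state": frame}
    return notifications


result = viewing_environment_estimation({"A": "eating", "B": None})
```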
FIG. 20 is a flowchart showing the playback control content reception process (S9B) executed by the main device 101 of the acoustic system of FIG. 17. FIG. 21 is a table showing an example of the playback control table generated by the main device 101 of FIG. 17.
In step S52B of FIG. 20, the main device 101 first receives the playback control table from the cloud artificial intelligence device 20 and acquires the playback control content. Next, in step S53B, the main device 101 controls each applicable playback device 100 based on the acquired playback control content, and the process returns to the original main routine.
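Steps S52B and S53B amount to receiving a table keyed by playback device and applying each entry. A minimal sketch, in which the table layout and the device representation are assumptions (the actual table of FIG. 21 is not reproduced here):

```python
def apply_playback_control(control_table, devices):
    """Apply a received playback control table to the matching devices.

    control_table: {device_id: {"volume": ...}} as received in step S52B
                   (layout assumed for illustration).
    devices: {device_id: settings dict} standing in for playback devices 100.
    """
    for device_id, control in control_table.items():
        if device_id in devices:
            # S53B: control the applicable playback device.
            devices[device_id].update(control)
    return devices


devices = {"A": {"volume": 0}, "B": {"volume": 0}}
received_table = {"A": {"volume": -6}}  # S52B: table received from the cloud
apply_playback_control(received_table, devices)
```

Devices not named in the table keep their current settings, which matches the per-device scope of the control content described above.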
With the acoustic system according to Embodiment 3 configured as described above, the playback control content is determined by the cloud artificial intelligence device 20, so the hardware and software of the main device 101 can be reduced relative to Embodiment 2 and its configuration simplified. The other functions and effects are the same as those of Embodiment 2.
In Embodiment 3, a video camera is used as the sensor 201; however, the present invention is not limited to this, and a human presence sensor may be used instead.
(Modification)
In the embodiments above, the volume value is changed as the playback control content according to the state of the person or the specific activity state; however, the present disclosure is not limited to this. For example, the correction value of the frequency and phase characteristics, or the delay amount, may be changed according to the state of the person. That is, at least one of the volume value, the correction value of the frequency and phase characteristics, and the delay amount may be controlled. For example, control may be performed such that the high-frequency signal level is increased while the person is eating, and the frequency characteristic is flattened while the person is reading or studying.
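The modification can be sketched as a state-to-settings mapping. The numeric values, field names, and state labels are illustrative assumptions; only the qualitative behavior (boost highs while eating, flatten while reading or studying) comes from the text above.

```python
def control_for_state(state):
    """Choose volume, a frequency-characteristic preset, and a delay amount
    from the listener's state (values assumed for illustration)."""
    if state == "eating":
        # Raise the high-frequency signal level while eating.
        return {"volume_db": -6, "eq": "boost_high", "delay_ms": 0}
    if state in ("reading", "studying"):
        # Flatten the frequency characteristic for concentration.
        return {"volume_db": -12, "eq": "flat", "delay_ms": 0}
    # Default: leave the characteristics unmodified.
    return {"volume_db": 0, "eq": "flat", "delay_ms": 0}


c = control_for_state("studying")
```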
Further, in the embodiments above, the main device 101 may receive the detection result of the sensor 201 as listener data, and the control unit 10 of the main device 101 may estimate the state of the listener; alternatively, the control unit 300 of each playback device 100 may estimate the state of the listener from the detection result of the sensor 201, and information on the estimated state may be transmitted to the main device 101 as listener data. Alternatively, the cloud artificial intelligence device 20 may receive the detection result of the sensor 201 as listener data via the main device 101, and the control unit 21 of the cloud artificial intelligence device 20 may estimate the state of the listener. Information on the estimated state may then be transmitted to the main device 101 as listener data.
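The three placements of state estimation can be contrasted in a short sketch. The `estimate` function, the detection labels, and the message shapes are invented for illustration; the point is only what form the listener data takes when it reaches or leaves the main device 101.

```python
def estimate(detection):
    # Stand-in for the sensor/image analysis that maps a raw detection
    # to a listener state (labels assumed for illustration).
    return "eating" if detection == "person_at_table" else "absent"


def listener_data(raw_detection, where):
    """Return the listener data as seen by the main device 101.

    where = "device": control unit 300 of the playback device estimates first.
    where = "main":   the main device receives the raw detection and estimates itself.
    where = "cloud":  the cloud AI device estimates and sends the state back.
    """
    if where == "main":
        # Raw detection result travels to the main device unprocessed.
        return {"kind": "raw", "value": raw_detection}
    # In the other two placements an estimated state is what moves around.
    return {"kind": "state", "value": estimate(raw_detection)}


d = listener_data("person_at_table", "device")
```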
As described in detail above, the acoustic system and related devices according to the present disclosure enable simple acoustic adjustment.
DESCRIPTION OF SYMBOLS
1 Living room
2 Dining room
3 Corridor
4 Shelf
5 Sofa
6 Table
7 Kitchen
8 Table
9 Ceiling
10 Control unit
11 Memory
12 Transmission signal transmission/reception unit
13 Operation unit
14 Display unit
15 Transmission cable connector
20 Cloud artificial intelligence device
21 Control unit
22 Artificial intelligence determination unit
23 Deep learning unit
24 Transmission signal transmission/reception unit
31 Beam
32 Screw fastening
100, 100A to 100E, A to E Playback device (playback device, first playback device, second playback device)
101 Main device (sound control device)
102 Transmission cable
201 Sensor
202 Signal processing unit
203 Amplifier
203a, 203b, 203c Transmission cable connector
300 Control unit (control means)
300m Internal memory
301, 302, 303 Transmission signal transmission/reception unit
304 Multiplexer
305 Delay memory
306 Frequency and phase characteristic correction circuit
307 Variable amplifier
SW1, SW2 Switch
Claims (15)
- A sound control device for an acoustic system, the acoustic system comprising a first playback device including a sensor that detects a state of a first listener, and the sound control device including control means for controlling sound of the first playback device, the first playback device and the sound control device each being connected via a network, the acoustic system being configured such that the sound control device transmits audio data to the first playback device and the first playback device plays back the received audio data, wherein the control means is configured to: receive, from the first playback device, first listener data indicating the state of the first listener; determine first playback control content for the first playback device based on the first listener data; and instruct the first playback device with the first playback control content.
- A sound control device for an acoustic system, the acoustic system comprising a first playback device including a sensor that detects a state of a first listener, and the sound control device including control means for controlling sound of the first playback device, the first playback device and the sound control device each being connected via a network, the acoustic system being configured such that the sound control device transmits audio data to the first playback device and the first playback device plays back the received audio data, wherein the control means is configured to: receive, from the first playback device, first listener data indicating the state of the first listener; transmit the first listener data to a cloud artificial intelligence device; receive first playback control content for the first playback device from the cloud artificial intelligence device; and instruct the first playback device with the first playback control content.
- The sound control device according to claim 1 or 2, wherein the sensor is a human presence sensor, and the state of the first listener is presence or absence, and a position, of the first listener.
- The sound control device according to claim 1 or 2, wherein the sensor is a video camera, and the state of the first listener is presence or absence of the first listener, a state relating to an activity of the first listener, and a position of the first listener.
- The sound control device according to any one of claims 1 to 4, wherein the first playback control content is at least one of a volume value of the first playback device, a correction value of frequency and phase characteristics, and a delay amount.
- The sound control device according to claim 1, wherein the acoustic system further comprises a second playback device connected via the network and including a sensor that detects a state of a second listener, the sound control device is configured to transmit the audio data to the second playback device, and the second playback device is configured to play back the received audio data, and wherein the control means is configured to control sound of the second playback device, and to: receive, from the second playback device, second listener data indicating the state of the second listener; determine second playback control content for the second playback device based on the second listener data; and instruct the second playback device with the second playback control content.
- The sound control device according to claim 2, wherein the acoustic system further comprises a second playback device connected via the network and including a sensor that detects a state of a second listener, the sound control device is configured to transmit the audio data to the second playback device, and the second playback device is configured to play back the received audio data, and wherein the control means is configured to control sound of the second playback device, and to: receive, from the second playback device, second listener data indicating the state of the second listener; transmit the second listener data to the cloud artificial intelligence device; receive second playback control content for the second playback device from the cloud artificial intelligence device; and instruct the second playback device with the second playback control content.
- A playback device for an acoustic system, the acoustic system comprising the playback device including a sensor that detects a state of a listener, and a sound control device including control means for controlling sound of the playback device, the playback device and the sound control device each being connected via a network, the acoustic system being configured such that the sound control device transmits audio data to the playback device and the playback device plays back the received audio data, the playback device being configured to: transmit listener data indicating the state of the listener detected by the sensor to the sound control device based on an instruction signal transmitted from the sound control device; and receive a control signal including playback control content from the sound control device and play back the audio data based on the received playback control content.
- The playback device according to claim 8, wherein the sensor is a human presence sensor, and the state of the listener is presence or absence, and a position, of the listener.
- The playback device according to claim 8, wherein the sensor is a video camera, and the state of the listener is presence or absence of the listener, a state relating to an activity of the listener, and a position of the listener.
- The playback device according to any one of claims 8 to 10, wherein the playback control content is at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
- An acoustic system comprising: the sound control device according to claim 1 or 2; and the playback device according to claim 8.
- The acoustic system according to claim 12, wherein the sensor is a human presence sensor, and the state of the listener is presence or absence, and a position, of the listener.
- The acoustic system according to claim 12, wherein the sensor is a video camera, and the state of the listener is presence or absence of the listener, a state relating to an activity of the listener, and a position of the listener.
- The acoustic system according to any one of claims 12 to 14, wherein the playback control content is at least one of a volume value of the playback device, a correction value of frequency and phase characteristics, and a delay amount.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017-061736 | 2017-03-27 | ||
JP2017061736A JP2020095064A (en) | 2017-03-27 | 2017-03-27 | Sound system, and sound controller and reproducing device thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018179722A1 true WO2018179722A1 (en) | 2018-10-04 |
Family
ID=63674992
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2018/001859 WO2018179722A1 (en) | 2017-03-27 | 2018-01-23 | Audio system, and sound control device and playback device thereof |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2020095064A (en) |
WO (1) | WO2018179722A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014093204A (en) * | 2012-11-05 | 2014-05-19 | Sony Corp | Acoustic illumination device and program |
JP2016005208A (en) * | 2014-06-18 | 2016-01-12 | 東芝ライテック株式会社 | Sound transmission system and illumination system |
JP2016051464A (en) * | 2014-08-29 | 2016-04-11 | パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America | Presentation information control method and presentation information control device |
JP2017046042A (en) * | 2015-08-24 | 2017-03-02 | シャープ株式会社 | Acoustic system, acoustic apparatus and terminal device |
- 2017-03-27: JP application JP2017061736A filed (published as JP2020095064A, status: active, pending)
- 2018-01-23: PCT application PCT/JP2018/001859 filed (published as WO2018179722A1, application filing)
Also Published As
Publication number | Publication date |
---|---|
JP2020095064A (en) | 2020-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10206030B2 (en) | Microphone array system and microphone array control method | |
JP6668661B2 (en) | Parameter control device and parameter control program | |
EP2011366B1 (en) | Sound pickup device and voice conference apparatus | |
JP3915804B2 (en) | Audio playback device | |
US20180352193A1 (en) | Information processing apparatus, information processing method, and program | |
JP4830644B2 (en) | Control device, synchronization correction method, and synchronization correction program | |
EP3226579B1 (en) | Information-processing device, information-processing system, control method, and program | |
US20170156012A1 (en) | System, audio output device, and method for automatically adjusting firing direction of upward firing speaker | |
EP2816823A1 (en) | Audio system and audio apparatus and channel mapping method thereof | |
EP3179744B1 (en) | Method, device and system for controlling a sound image in an audio zone | |
KR102516997B1 (en) | Audio output control device, audio output control method, and program | |
US10292000B1 (en) | Frequency sweep for a unique portable speaker listening experience | |
US10616684B2 (en) | Environmental sensing for a unique portable speaker listening experience | |
US10861465B1 (en) | Automatic determination of speaker locations | |
WO2016080536A1 (en) | Content playback device, content playback method, and content playback program | |
JP2010252134A (en) | Loudspeaker system | |
CN106126182B (en) | Data output method and electronic equipment | |
JP2012049663A (en) | Ceiling speaker system | |
WO2018179722A1 (en) | Audio system, and sound control device and playback device thereof | |
JP2006196940A (en) | Sound image localization control apparatus | |
US10979803B2 (en) | Communication apparatus, communication method, program, and telepresence system | |
US11050579B2 (en) | Distribution destination specifying device and distribution destination specifying method | |
JP2007329753A (en) | Voice communication device and voice communication device | |
US20070041598A1 (en) | System for location-sensitive reproduction of audio signals | |
US20180088898A1 (en) | Content Reproduction Device, Content Reproduction System, and Control Method for A Content Reproduction Device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18776450 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 18776450 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |