WO2018173112A1 - Sound output control device, sound output control system, and sound output control method - Google Patents

Publication number: WO2018173112A1
Application number: PCT/JP2017/011171
Authority: WIPO (PCT)
Applicant: 三菱電機株式会社 (Mitsubishi Electric Corporation)
Inventors: 茜 木村, 寛祥 佐藤
Other publication languages: French (fr), Japanese (ja)

  • The present invention relates to a sound output control device, a sound output control system, and a sound output control method for controlling sound output to each of a plurality of occupants of a vehicle.
  • In Patent Document 1, since the overall atmosphere in the vehicle is estimated, there is a problem that the sound output cannot be controlled at a volume suitable for each occupant.
  • Consider an example in which the driver and a passenger in the rear seat are talking through an in-car call using the vehicle-mounted microphones and speakers, while the passenger in the front passenger seat is sleeping.
  • The vehicular information providing apparatus described in Patent Document 1 detects a laughing voice produced by the driver even while the front-seat passenger is sleeping, and presumes that the conversation between the two is lively and that the atmosphere in the vehicle is active.
  • When the in-vehicle speakers are controlled to assist the conversation between the two based on this estimation result, the volume of the in-vehicle speaker on the driver's side and the volume of the in-vehicle speaker on the rear-seat side are both raised so that both utterances are easier to hear. For this reason, the sleeping passenger in the front passenger seat may be disturbed by their speech.
  • The present invention solves the above problem, and aims to obtain a sound output control device, a sound output control system, and a sound output control method that can output sound at a volume suitable for each occupant.
  • To this end, the sound output control device includes an estimation unit configured to estimate the in-vehicle atmosphere for each of a plurality of occupants of a vehicle, and a control unit that controls the sound output for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit.
  • FIG. 1 is a block diagram showing the configuration of a sound output control system according to Embodiment 1 of the present invention.
  • FIG. 2A is a block diagram showing a hardware configuration for realizing the functions of the sound output control system according to Embodiment 1.
  • FIG. 2B is a block diagram showing a hardware configuration for executing software that implements the functions of the sound output control system according to Embodiment 1.
  • FIG. 3 is a flowchart illustrating the sound output control method according to Embodiment 1.
  • FIG. 4 is a block diagram showing the configuration of the sound output control device according to Embodiment 1.
  • FIG. 5 is a flowchart illustrating the operation of the estimation unit according to Embodiment 1.
  • FIG. 6 is a diagram showing an arrangement example of the sound output control system according to Embodiment 1.
  • FIG. 7 is a diagram showing another arrangement example of the sound output control system according to Embodiment 1.
  • FIG. 8 is a flowchart showing call processing by the portable terminal.
  • FIG. 9 is a flowchart showing sound output control processing of the portable terminal by the sound output control device according to Embodiment 1.
  • FIG. 10 is a block diagram showing the configuration of a sound output control device according to Embodiment 2 of the present invention.
  • FIG. 11 is a flowchart illustrating the operation of the estimation unit according to Embodiment 2.
  • FIG. 12 is a block diagram showing the configuration of a sound output control device according to Embodiment 3 of the present invention.
  • FIG. 13 is a flowchart illustrating the operation of the control unit in Embodiment 3.
  • FIG. 14 is a diagram showing an arrangement example of the sound output control system according to Embodiment 3.
  • FIG. 1 is a block diagram showing the configuration of a sound output control system 1 according to Embodiment 1 of the present invention.
  • the sound output control system 1 is a system that controls sound output for each occupant of a vehicle, and controls sound output in in-car calls or media playback performed between occupants.
  • the sound output control device 2 is a device that controls the sound output of the sound output devices 3a to 3d.
  • The sound output control device 2 can acquire information from each of the sound input devices 4a to 4d, the portable terminal 5, the vehicle information acquisition unit 6, the storage unit 7, and the photographing device 8.
  • As a functional configuration, the sound output control device 2 includes an arithmetic processing unit 9, an estimation unit 10 contained in the arithmetic processing unit 9, and a control unit 11.
  • the sound output devices 3a to 3d are, for example, in-vehicle speakers arranged for each seat in the vehicle, and convert the sound signal, which is an electrical signal, into sound and output it.
  • The sound signal of the output sound is converted into an analog signal by the signal conversion unit 12a, amplified by the signal amplification unit 13a, and then output from each of the sound output devices 3a to 3d.
  • the sound input devices 4a to 4d are, for example, in-vehicle microphones arranged for each seat in the vehicle, and convert the input sound into a sound signal.
  • The sound signals converted by the sound input devices 4a to 4d are amplified by the signal amplification unit 13b, converted into digital signals by the signal conversion unit 12b, and then input to the arithmetic processing unit 9.
  • The mobile terminal 5 is a communication terminal such as a smartphone, a mobile phone, or a tablet PC, and can communicate with the sound output control device 2 to output the above-mentioned in-car call or media playback sound. Note that the sound output control device 2 also controls the sound output of a speaker included in the mobile terminal 5.
  • the portable terminal 5 communicates with the sound output control device 2 through the communication I / F unit 14. Examples of a communication method for the mobile terminal 5 to communicate with the sound output control device 2 include short-range wireless communication such as Bluetooth (registered trademark, the description is omitted below).
  • the vehicle information acquisition unit 6 acquires information on the state of the vehicle and the driving operation of the vehicle (hereinafter referred to as vehicle information).
  • vehicle information includes, for example, window opening / closing degree, engine speed, vehicle speed, accelerator opening, and vehicle position information.
  • As a method of acquiring the vehicle information, it is conceivable that the vehicle information acquisition unit 6 acquires it from an ECU (Electronic Control Unit) through an in-vehicle network.
  • Alternatively, the vehicle information acquisition unit 6 may calculate vehicle information based on information acquired from the vehicle side.
  • the vehicle speed may be a value calculated from a pulse signal output according to the rotation of the wheel.
  • the storage unit 7 stores sound output history information for each vehicle occupant.
  • In the sound output history information, the output volume set in the sound output device is registered for each occupant and for each sound source.
  • the sound output history information is stored in the storage unit 7 in association with the estimation information of the atmosphere in the vehicle when the output volume is set to the sound output device.
  • the history information may be stored in the storage unit 7 in association with vehicle interior noise when the output volume is set to the sound output device.
  • The sound output history information is automatically created by the control unit 11 and stored in the storage unit 7. For example, when the volume is manually set by an occupant, the control unit 11 creates history information on the assumption that this volume is the optimum volume for that occupant, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere, the in-vehicle noise, and the occupant's identification information. If the volume is not set by the occupant, the control unit 11 creates history information on the assumption that the volume previously set for the sound output device is the optimum volume for the occupant, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere at this time, the in-vehicle noise, and the occupant's identification information.
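The history record described above can be sketched as a simple per-occupant, per-source data structure. The field names and example values below are illustrative assumptions, not terminology from the patent.

```python
from dataclasses import dataclass

@dataclass
class SoundOutputHistory:
    """One history record per occupant and per sound source (illustrative)."""
    occupant_id: str       # occupant identification information
    sound_source: str      # e.g. "in_car_call" or "media_playback" (assumed labels)
    output_volume: int     # volume that was set on the sound output device
    atmosphere: str        # estimated in-vehicle atmosphere when the volume was set
    cabin_noise_db: float  # in-vehicle noise level when the volume was set
    set_manually: bool     # True if the occupant set the volume by hand

# A manually set volume is treated as the optimum volume for that occupant.
record = SoundOutputHistory("occupant_A", "in_car_call", 7, "fun", 52.0, True)
```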
  • The photographing device 8 is a camera provided in the vehicle, and photographs the state inside the vehicle including the occupants. An image including an occupant's face cut out from an image photographed by the photographing device 8 is used for personal authentication, but the image may also be used for estimating the atmosphere in the vehicle. Image information photographed by the photographing device 8 is converted into an image signal by the signal conversion unit 12c and output to the estimation unit 10 and the control unit 11.
  • the arithmetic processing unit 9 performs arithmetic processing related to sound output. For example, arithmetic processing such as echo cancellation and noise cancellation is performed.
  • the arithmetic processing unit 9 includes an estimation unit 10.
  • The estimation unit 10 estimates the in-vehicle atmosphere for each of a plurality of occupants of the vehicle.
  • the control unit 11 controls the sound output of the sound output devices 3a to 3d for each occupant based on the atmosphere in the vehicle for each occupant estimated by the estimation unit 10.
  • Although the estimation unit 10 and the control unit 11 are here configured as parts of one sound output control device 2, they may be provided in separate devices. In this case, the two devices have communication functions capable of exchanging data with each other.
  • FIG. 2A is a block diagram illustrating a hardware configuration for realizing the function of the sound output control system 1 according to Embodiment 1.
  • FIG. 2B is a block diagram illustrating a hardware configuration that executes software that implements the functions of the sound output control system 1 according to the first embodiment.
  • The signal I/F 100 corresponds to the signal conversion unit 12a and the signal amplification unit 13a shown in FIG. 1.
  • The in-vehicle speakers 101 correspond to the sound output devices 3a to 3d shown in FIG. 1.
  • The signal I/F 102 corresponds to the signal conversion unit 12b and the signal amplification unit 13b shown in FIG. 1.
  • The in-vehicle microphones 103 correspond to the sound input devices 4a to 4d shown in FIG. 1.
  • The communication I/F 104 corresponds to the communication I/F unit 14 shown in FIG. 1.
  • The signal I/F 105 corresponds to the signal conversion unit 12c shown in FIG. 1.
  • The storage device 106 corresponds to the storage unit 7 shown in FIG. 1.
  • The sound output control device 2 includes a processing circuit for executing the processes of step ST1 and step ST2 shown in FIG. 3.
  • the processing circuit may be dedicated hardware or a CPU (Central Processing Unit) that executes a program stored in the memory.
  • The processing circuit 107 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
  • the function of the estimation unit 10 and the function of the control unit 11 may be realized by separate processing circuits, or these functions may be realized by a single processing circuit.
  • the functions of the estimation unit 10 and the control unit 11 are realized by software, firmware, or a combination of software and firmware.
  • Software or firmware is described as a program and stored in the memory 109.
  • the CPU 108 reads out and executes the program stored in the memory 109, thereby realizing the functions of the respective units.
  • The sound output control device 2 includes a memory 109 for storing programs that, when executed by the CPU 108, result in the execution of the processes of step ST1 and step ST2 shown in FIG. 3. These programs cause a computer to execute the procedures or methods of the estimation unit 10 and the control unit 11.
  • The memory 109 corresponds to, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), or to a magnetic disk, a flexible disk, an optical disk, a compact disk, a mini disk, a DVD, or the like.
  • A part of the functions of the estimation unit 10 and the control unit 11 may be realized by dedicated hardware, and a part may be realized by software or firmware.
  • For example, the function of the estimation unit 10 may be realized by a processing circuit as dedicated hardware, and the function of the control unit 11 may be realized by the CPU 108 reading and executing a program stored in the memory 109.
  • the processing circuit can realize each of the above functions by hardware, software, firmware, or a combination thereof.
  • FIG. 3 is a flowchart showing the sound output control method according to Embodiment 1, and shows a series of processes for controlling the sound output for each occupant based on the atmosphere in the vehicle for each occupant.
  • the estimation unit 10 estimates the atmosphere in the vehicle for each occupant (step ST1).
  • the control unit 11 controls the sound output of the sound output devices 3a to 3d for each occupant based on the atmosphere in the vehicle for each occupant (step ST2).
  • the sound output is controlled for each occupant based on the atmosphere in the vehicle for each occupant, the sound can be output at a volume suitable for each occupant of the vehicle.
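A minimal Python sketch of this two-step flow (estimate each occupant's in-vehicle atmosphere, then set each occupant's volume from it); the helper callables and the atmosphere-to-volume mapping are illustrative assumptions, not part of the patent.

```python
def sound_output_control_step(occupants, estimate_atmosphere, volume_for):
    """One pass of the two-step method: estimate the in-vehicle atmosphere
    for each occupant (step ST1), then determine each occupant's sound
    output from that atmosphere (step ST2)."""
    atmospheres = {o: estimate_atmosphere(o) for o in occupants}  # step ST1
    return {o: volume_for(atmospheres[o]) for o in occupants}    # step ST2

# Illustrative mapping: lively occupants get a higher volume, a quiet
# (possibly sleeping) occupant gets output stopped (volume 0).
volumes = sound_output_control_step(
    ["A", "B", "C", "D"],
    estimate_atmosphere=lambda o: {"A": "fun", "B": "fun",
                                   "C": "quiet", "D": "fun"}[o],
    volume_for=lambda atm: {"fun": 8, "dull": 3, "quiet": 0}[atm],
)
```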
  • FIG. 4 is a block diagram showing the configuration of the sound output control device 2 according to Embodiment 1, and shows only the configuration for controlling the sound output.
  • the estimation unit 10 includes an acquisition unit 15, an analysis unit 16, and an atmosphere estimation unit 17.
  • The acquisition unit 15 acquires occupant information used for estimating the in-vehicle atmosphere for each occupant.
  • the acquisition unit 15 acquires an input sound signal as occupant information from the sound input devices 4a to 4d provided for each seat where an occupant sits in the vehicle.
  • the analysis unit 16 analyzes the occupant information acquired by the acquisition unit 15 and obtains information serving as a reference for estimating the atmosphere in the vehicle. For example, the analysis unit 16 acoustically analyzes an occupant's sound signal to obtain one of conversation content, voice tone, and utterance frequency.
  • the atmosphere estimation unit 17 estimates the atmosphere in the vehicle for each occupant based on the analysis result of the analysis unit 16. For example, the atmosphere estimation unit 17 estimates the atmosphere in the vehicle for each occupant from the sound signal for each occupant based on the content of the conversation, the tone of the voice, and the utterance frequency.
  • the control unit 11 determines the output volume for each occupant based on the estimated atmosphere in the vehicle for each occupant, and controls the sound output devices 3a to 3d so as to output sound at the determined output volume. For example, the output volume of a sound output device corresponding to an occupant who is active in conversation is increased, and the output volume of a sound output device corresponding to an occupant who is not active in conversation is decreased. Further, the volume of the sound output device corresponding to the sleeping passenger may be set to 0 (output stop).
  • the acquisition unit 15 may acquire in-vehicle image information captured by the imaging device 8.
  • the analysis unit 16 performs image analysis on the image information in the vehicle to obtain either the facial expression or the gesture of the passenger.
  • the atmosphere estimation unit 17 estimates the atmosphere in the vehicle for each occupant based on the facial expression and gesture of the occupant.
  • The atmosphere estimation unit 17 may also estimate the in-vehicle atmosphere for each occupant based on both the acoustic analysis results of the input sound signals acquired by the sound input devices 4a to 4d and the image analysis result of the in-vehicle image information.
  • FIG. 5 is a flowchart showing the operation of the estimation unit 10 and shows the details of the processing corresponding to step ST1 of FIG.
  • FIG. 6 is a diagram showing an arrangement example of the sound output control system 1 according to the first embodiment, and shows a case where the sound output control system 1 is arranged in the vehicle 200.
  • The sound input device 4a is provided corresponding to the occupant A, who is the driver; the sound input device 4b corresponding to the occupant B in the front passenger seat; the sound input device 4c corresponding to the occupant C in the rear seat; and the sound input device 4d corresponding to the occupant D in the rear seat.
  • The acquisition unit 15 acquires occupant information used for estimating the in-vehicle atmosphere for each occupant (step ST1a). For example, the acquisition unit 15 acquires the sound signals input to the sound input devices 4a to 4d at all times while the sound output control system 1 is active, or periodically or continuously after a trigger event occurs. Thereby, the sound around each occupant's seat or each occupant's uttered voice is acquired as a sound signal. Examples of the trigger event include an operation that starts sound generation, such as an in-car call start operation or a media playback start operation.
  • Next, the analysis unit 16 analyzes the occupant information acquired by the acquisition unit 15 (step ST2a).
  • the analysis unit 16 acoustically analyzes the sound signal to extract the utterance voice, recognizes the voice, and obtains the result of analyzing the presence / absence of a specific keyword as the content of the conversation for each occupant.
  • A specific keyword is a keyword that serves as a reference indicating the degree of excitement among the occupants; examples include "interesting" and "not interested".
  • the analysis unit 16 may acoustically analyze the sound signal to obtain the tone of the occupant's utterance voice, or may obtain the magnitude of the utterance voice. Furthermore, the analysis unit 16 may count the number of times the utterance voice can be extracted by acoustic analysis of the sound signal for each occupant, and obtain the utterance frequency for each occupant from the number of counts.
  • The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the analysis result of the analysis unit 16 (step ST3a). For example, table data in which keywords are associated with information indicating the in-vehicle atmosphere is stored in the storage unit 7. The atmosphere estimation unit 17 estimates the atmosphere corresponding to a keyword extracted from the sound signal with reference to the table data read from the storage unit 7. If an occupant's uttered voice includes the keyword "interesting", that occupant is estimated to be in a "fun" atmosphere. In addition, if the tone of the uttered voice is higher than a threshold value, the atmosphere estimation unit 17 may presume that the occupant is actively talking and estimate a "fun" atmosphere, and if the tone is equal to or less than the threshold value, presume that the occupant is not actively talking and estimate a "dull" atmosphere. Further, when the utterance frequency is equal to or lower than a threshold value, the atmosphere estimation unit 17 may determine that the occupant is not participating in the conversation or is sleeping, and estimate a "quiet" atmosphere.
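The keyword table lookup and the tone/frequency thresholds can be sketched as follows; the table contents, threshold values, and rule priority are illustrative assumptions.

```python
# Table data associating keywords with atmospheres, as held in storage unit 7
# (contents are illustrative).
KEYWORD_TABLE = {"interesting": "fun", "not interested": "dull"}

def estimate_atmosphere(keywords, voice_tone, tone_threshold,
                        utterance_count, frequency_threshold):
    """Estimate one occupant's in-vehicle atmosphere from acoustic analysis
    results: extracted keywords, voice tone, and utterance frequency."""
    for kw in keywords:                       # keyword match takes precedence
        if kw in KEYWORD_TABLE:
            return KEYWORD_TABLE[kw]
    if utterance_count <= frequency_threshold:
        return "quiet"                        # not talking, possibly sleeping
    if voice_tone > tone_threshold:
        return "fun"                          # actively talking
    return "dull"                             # talking, but not actively
```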
  • the acquisition unit 15 may acquire in-vehicle image information captured by the imaging device 8 in addition to or instead of the sound signals input to the sound input devices 4a to 4d. It is assumed that all the passengers in the vehicle are photographed in this image information.
  • The analysis unit 16 analyzes the image information to extract an image area for each occupant, and analyzes each occupant's image area to obtain the occupant's facial expression or gesture.
  • the facial expression of the occupant can be identified by comparing the occupant's face image analyzed from the image with the facial expression image pattern. It is possible to identify the occupant's gesture in the same way.
  • The gestures identified here are gestures that serve as a reference for estimating an occupant's engagement in the conversation, such as "operating a smartphone" or "turning the face toward the window".
  • The atmosphere estimation unit 17 estimates the atmosphere corresponding to the facial expression or gesture identified from the image information, with reference to table data in which facial expressions and gestures are associated with information indicating the atmosphere. For example, if the facial expression is a smile, the occupant is presumed to be in a "fun" atmosphere, and if the facial expression is downcast, the occupant is presumed to be in a "dull" atmosphere.
  • The atmosphere estimation unit 17 may estimate an occupant's final atmosphere by considering the analysis result of the sound signal and the analysis result of the image information in a complementary manner. For example, if the utterance frequency is below the threshold but the facial expression is a laughing one, it is determined that the occupant is listening to the conversation or to media playback rather than sleeping, and the occupant is presumed to be in a "fun" atmosphere.
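This complementary fusion can be sketched in a few lines; the expression labels and the override rule are illustrative assumptions.

```python
def fuse_estimates(utterance_count, frequency_threshold, facial_expression):
    """Complementary fusion of acoustic and image analysis results:
    a low utterance frequency alone would suggest a sleeping occupant,
    but a smiling face overrides it, since the occupant is evidently
    listening to the conversation or the media playback."""
    if utterance_count <= frequency_threshold:
        return "fun" if facial_expression == "smile" else "quiet"
    return "fun" if facial_expression == "smile" else "dull"
```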
  • The control unit 11 determines the output volume for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit 10, and controls the sound output devices so as to output sound at the determined output volume. For example, the control unit 11 calculates the output volume of the sound output device for each occupant by multiplying the reference volume of each sound output device by a weight set for each in-vehicle atmosphere. Alternatively, table data in which the in-vehicle atmosphere is associated with the volume of the sound output device may be stored in the storage unit 7, and the control unit 11 may select the output volume of the sound output device for each occupant with reference to the table data read from the storage unit 7.
  • The control unit 11 may correct the output volume for each occupant obtained based on the in-vehicle atmosphere, in consideration of the in-vehicle atmosphere of the other occupants and the positional relationship between the occupants in the vehicle. For example, when the volume of the sound output device corresponding to an occupant y seated next to an occupant x is increased, the increase is corrected to be smaller when the occupant x is asleep ("quiet" atmosphere) than when the occupant x is awake.
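The weighted volume calculation with the sleeping-neighbor correction can be sketched as follows; the weight values and the damping factor are illustrative assumptions, not figures from the patent.

```python
# Per-atmosphere weights applied to the reference volume (illustrative).
ATMOSPHERE_WEIGHTS = {"fun": 1.2, "dull": 0.8, "quiet": 0.0}

def output_volume(reference_volume, atmosphere, neighbor_atmosphere,
                  sleeping_neighbor_factor=0.5):
    """Output volume for one occupant: the reference volume of the sound
    output device multiplied by a per-atmosphere weight, with any volume
    *increase* damped when the adjacent occupant is asleep ('quiet')."""
    volume = reference_volume * ATMOSPHERE_WEIGHTS[atmosphere]
    increase = volume - reference_volume
    if increase > 0 and neighbor_atmosphere == "quiet":
        volume = reference_volume + increase * sleeping_neighbor_factor
    return volume
```

A decrease in volume is never damped: only raising a speaker next to a sleeping occupant is restrained.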
  • Assume that the occupant A in the driver's seat and the occupant D in the rear seat are having a conversation by in-car call, the occupant B in the front passenger seat is watching the in-vehicle television, the occupant D is talking actively, and the occupant A is concentrating on driving.
  • In this case, the occupant D, who is active in the conversation, is presumed to be in a "fun" atmosphere, and the occupant A, who is not active in the conversation, is presumed to be in a "dull" atmosphere.
  • The control unit 11 decreases the output volume of the sound output device 3a corresponding to the occupant A, who is estimated to be in a "dull" atmosphere. Further, the control unit 11 increases the output volume of the sound output device 3b corresponding to the occupant B, who is estimated to be in a "fun" atmosphere, and increases the output volume of the sound output device 3d corresponding to the occupant D, who is estimated to be in a "fun" atmosphere. Further, the control unit 11 sets the output volume of the sound output device 3c corresponding to the occupant C, who is estimated to be in a "quiet" atmosphere, to 0 (output stopped).
  • The control unit 11 may correct the increase in the output volume of the sound output device 3d in consideration of the output volume of the sound output device 3b corresponding to the occupant B.
  • The control unit 11 may determine the presence or absence of occupants in the vehicle based on an image signal obtained by photographing the inside of the vehicle, and may perform control so as to start the process of FIG. 5 only when it determines that there are a plurality of occupants in the vehicle.
  • Sensor information, such as from a weight sensor provided in each seat, may also be used for this determination. Thereby, the power consumption of the sound output control device 2 can be suppressed.
  • FIG. 7 is a diagram showing another arrangement example of the sound output control system 1, and shows a case where the sound output control system 1 is arranged in the vehicle 200.
  • the sound input devices 4a to 4d are provided corresponding to the seats on which the occupants A to D are seated, and the occupant D in the rear seat carries the portable terminal 5.
  • Hereinafter, control of the sound output of an in-car call performed between the occupant A and the occupant D via the sound output control device 2 and the portable terminal 5 will be described.
  • FIG. 8 is a flowchart showing call processing by the mobile terminal 5, and shows a series of processing in in-car call using the mobile terminal 5.
  • the mobile terminal 5 searches the sound output control device 2 to check whether the sound output control device 2 has been detected (step ST1b). If the sound output control device 2 is not detected (step ST1b; NO), the portable terminal 5 repeats the search for the sound output control device 2.
  • When the sound output control device 2 is detected (step ST1b; YES), the mobile terminal 5 establishes a communication connection with the sound output control device 2 and transmits the position information of the mobile terminal 5 (step ST2b).
  • the position information of the portable terminal 5 is information for specifying the occupant, and may be information indicating a seat on which the occupant D is seated. Subsequently, the mobile terminal 5 collects sound around the rear seat on which the occupant D is seated, and transmits this input sound signal to the sound output control device 2 (step ST3b).
  • the portable terminal 5 starts an in-car call via the sound output control device 2 (step ST4b).
  • In the in-car call, the uttered voice of the occupant A is input to the sound input device 4a and output from the speaker of the portable terminal 5, and the uttered voice of the occupant D is input to the microphone of the portable terminal 5 and output from the sound output device 3a.
  • a control signal is transmitted from the sound output control device 2 to the portable terminal 5, and the output volume of the speaker of the portable terminal 5 becomes a volume corresponding to this control signal.
  • The portable terminal 5 checks whether or not the in-car call has ended (step ST5b). If it has not ended (step ST5b; NO), the process returns to step ST4b. If the call has ended (step ST5b; YES), the process of FIG. 8 ends.
  • FIG. 9 is a flowchart showing a sound output control process of the mobile terminal 5 by the sound output control device 2, and shows a series of processes in an in-car call using the mobile terminal 5.
  • the acquisition unit 15 of the sound output control device 2 searches the mobile terminal 5 via the communication I / F unit 14 and confirms whether the mobile terminal 5 has been detected (step ST1c). If the mobile terminal 5 is not detected (step ST1c; NO), the acquisition unit 15 repeats the search for the mobile terminal 5.
  • When the mobile terminal 5 is detected (step ST1c; YES), the communication I/F unit 14 establishes a communication connection with the mobile terminal 5 and receives the position information of the mobile terminal 5 (step ST2c). Subsequently, the communication I/F unit 14 receives the input sound signal from the portable terminal 5 (step ST3c). The acquisition unit 15 acquires the position information of the mobile terminal 5 and the input sound signal received by the communication I/F unit 14, and outputs them to the analysis unit 16.
  • the atmosphere estimation unit 17 estimates the atmosphere in the vehicle for each occupant based on the analysis result of the analysis unit 16 (step ST4c).
  • the acquisition unit 15 acquires the sound signals input to the sound input devices 4a to 4d in addition to the sound signal received from the mobile terminal 5, and outputs the acquired sound signals to the analysis unit 16.
  • The analysis unit 16 acoustically analyzes the input sound signals from the mobile terminal 5 and the sound input device 4d to obtain the conversation content, the voice tone, or the utterance frequency. Similarly, the analysis unit 16 acoustically analyzes the sound signals input to the sound input devices 4a to 4c. Based on these analysis results, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each of the occupants A to D.
  • The control unit 11 controls the call volume based on the in-vehicle atmosphere of each of the occupants A to D estimated by the atmosphere estimation unit 17 (step ST5c). For example, when determining the output volume according to the in-vehicle atmosphere of the occupant A, the control unit 11 corrects the output volume in consideration of the in-vehicle atmosphere of the occupant B seated next to the occupant A, and controls the sound output device 3a to output sound at the corrected output volume. When determining the output volume according to the in-vehicle atmosphere of the occupant D, the control unit 11 corrects the output volume in consideration of the in-vehicle atmosphere of the occupant C seated next to the occupant D, and then creates a control signal for outputting sound at the corrected output volume. This control signal is transmitted to the mobile terminal 5 by the communication I/F unit 14.
  • The control unit 11 confirms whether or not the in-car call has ended (step ST6c). If it has not ended (step ST6c; NO), the process returns to step ST4c. If the call has ended (step ST6c; YES), the process of FIG. 9 ends.
  • When the mobile terminal 5 is a control target, it is not necessary to provide an in-vehicle speaker and an in-vehicle microphone for the occupant who uses the mobile terminal 5. Therefore, when all the occupants carry portable terminals 5 and the sound outputs of these terminals are controlled, in-vehicle speakers and in-vehicle microphones become unnecessary, so that the cost required for introducing the sound output control system 1 can be reduced.
  • As described above, the sound output control device 2 according to Embodiment 1 includes the estimation unit 10, which estimates the in-vehicle atmosphere for each occupant of the vehicle 200, and the control unit 11, which controls the sound output for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit 10.
  • The control unit 11 controls the sound output of in-car calls performed between the occupants of the vehicle 200, and controls the output of media reproduction sound. Since the sound output is controlled for each occupant by these configurations, sound can be output at a volume suitable for each occupant of the vehicle 200.
  • Embodiment 2. FIG. 10 is a block diagram showing the configuration of a sound output control device 2A according to Embodiment 2 of the present invention.
  • the sound output control device 2A includes an estimation unit 10A and a control unit 11A as a configuration for controlling sound output.
  • the estimation unit 10A includes an acquisition unit 15A, an analysis unit 16, an atmosphere estimation unit 17, and a noise estimation unit 18.
  • the acquisition unit 15A acquires information used for estimation of the atmosphere in the vehicle and estimation of the noise in the vehicle for each occupant. For example, the acquisition unit 15A acquires, as occupant information, the sound signal input by the sound input device in the vehicle and the image information in the vehicle photographed by the photographing device 8 as in the first embodiment. The acquisition unit 15A acquires the vehicle information acquired by the vehicle information acquisition unit 6 as information used for estimation of in-vehicle noise.
  • the vehicle information is information related to the state of the vehicle and the driving operation of the vehicle, and includes, for example, window opening, engine speed, vehicle speed, accelerator opening, and vehicle position information.
  • The noise estimation unit 18 estimates the in-vehicle noise for each occupant based on at least one of the vehicle information and the sound signal input to the sound input device. For example, the noise estimation unit 18 estimates the road noise level from the traveling state and traveling location of the vehicle specified from the vehicle information, and estimates the wind noise level from the opening degree of the window near the occupant. The noise estimation unit 18 uses the sum of the road noise level and the wind noise level as the in-vehicle noise for each occupant. Alternatively, the noise estimation unit 18 may use, as the in-vehicle noise level for each occupant, the signal level obtained by removing the levels of the uttered voices and the media reproduction sound acquired by the acquisition unit 15A from the sound signal input to the sound input device.
  • The control unit 11A controls the sound output of the sound output devices 3a to 3d for each occupant based on the in-vehicle atmosphere for each occupant estimated by the atmosphere estimation unit 17 and the in-vehicle noise for each occupant estimated by the noise estimation unit 18. For example, the control unit 11A may select the output volume of the sound output device for each occupant from table data in which output volumes of the sound output device are associated with combinations of in-vehicle atmosphere and in-vehicle noise.
  • Each function of the estimation unit 10A and the control unit 11A in the sound output control device 2A is realized by a processing circuit. That is, the sound output control device 2A includes a processing circuit for executing these functions. As shown in FIGS. 2A and 2B, the processing circuit may be dedicated hardware or a CPU that executes a program stored in a memory.
  • FIG. 11 is a flowchart showing the operation of the estimation unit 10A, and shows details of processing corresponding to step ST1 in FIG.
  • the acquisition unit 15A acquires occupant information used to estimate the atmosphere in the vehicle and vehicle information used to estimate vehicle noise (step ST1d).
  • The acquisition unit 15A acquires the sound signal input to the sound input device and the vehicle information either constantly or periodically while the sound output control system 1 is active, or constantly after a trigger event occurs.
  • In this way, sounds around each occupant's seat are acquired as sound signals, and the vehicle information is acquired.
  • the event includes an operation that is a starting point for generating a sound output, such as an in-car call start operation and a media playback start operation.
  • The acquisition unit 15A may acquire in-vehicle image information captured by the photographing device 8. It is assumed that all the occupants in the vehicle are captured in this image information.
  • the analysis unit 16 analyzes the occupant information acquired by the acquisition unit 15A, and obtains information serving as an estimation standard for the atmosphere in the vehicle for each occupant (step ST2d). This process is the same as step ST2a in FIG.
  • the atmosphere estimation unit 17 estimates the atmosphere in the vehicle for each occupant based on the analysis result of the analysis unit 16 (step ST3d). This process is also the same as step ST3a in FIG.
  • the noise estimation unit 18 estimates in-vehicle noise for each occupant based on at least one of vehicle information and occupant information (step ST4d). For example, road noise table data in which the vehicle running state and location and road noise level are associated with each other, and wind noise table data in which the window opening and the wind noise level are associated with each other are stored in the storage unit 7.
  • the noise estimation unit 18 refers to the road noise table data read from the storage unit 7 and estimates the road noise level corresponding to the traveling state and the traveling location of the vehicle specified from the vehicle information.
  • the road noise level is a base level of the noise level in the vehicle.
  • the noise estimation unit 18 refers to the table data for wind noise read from the storage unit 7 and estimates the wind noise level corresponding to the opening degree of the window near the passenger for each passenger.
  • the vehicle interior noise level for each occupant is, for example, a sound level obtained by adding a wind noise level to a road noise level.
  • Alternatively, the noise estimation unit 18 may estimate the in-vehicle noise for each occupant as the sound level obtained by removing the levels of the occupants' uttered voices and the media reproduction sound from the sound signal input to the sound input devices 4a to 4d provided for the respective seats in the vehicle.
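The table-lookup route of the noise estimation in step ST4d can be sketched as follows. The table contents and decibel values are invented for illustration; the actual tables are read from the storage unit 7.

```python
# Hypothetical road-noise table: (running state, location) -> dB level.
ROAD_NOISE_DB = {
    ("highway", "city"): 65,
    ("highway", "tunnel"): 72,
    ("local", "city"): 55,
    ("stopped", "city"): 40,
}
# Hypothetical wind-noise table: window opening (%) -> added dB.
WIND_NOISE_DB = {0: 0, 25: 3, 50: 6, 100: 12}

def estimate_in_vehicle_noise(running_state, location, window_opening_pct):
    """Per-occupant in-vehicle noise = road noise (base level) + wind noise
    for the window nearest the occupant."""
    road = ROAD_NOISE_DB[(running_state, location)]
    wind = WIND_NOISE_DB[window_opening_pct]
    return road + wind

print(estimate_in_vehicle_noise("highway", "city", 50))  # 71
```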
  • The control unit 11A determines the output volume for each occupant based on the in-vehicle atmosphere and the in-vehicle noise estimated by the estimation unit 10A, and controls the sound output devices to output sound at the determined output volumes.
  • For example, the control unit 11A calculates the output volume of the sound output device for each occupant by multiplying the reference volume of each sound output device by a weight set for each in-vehicle atmosphere.
  • Alternatively, the control unit 11A may select the output volume of the sound output device for each occupant from table data in which output volumes are associated with combinations of in-vehicle atmosphere and in-vehicle noise.
  • The control unit 11A then corrects the output volume obtained as described above so that it is larger than the in-vehicle noise level.
  • At this time, the control unit 11A controls the degree to which the output volume is increased in consideration of the in-vehicle atmosphere of each occupant and the positional relationship between the occupants in the vehicle. For example, when increasing the volume of the sound output device corresponding to occupant y seated next to occupant x, the amount of increase is corrected to be smaller when occupant x is asleep (a "quiet" atmosphere) than when occupant x is awake.
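The volume determination just described (reference volume multiplied by an atmosphere weight, then raised above the in-vehicle noise level, with a smaller raise when the adjacent occupant is asleep) can be sketched as follows. The margin and the damping factor are hypothetical values, not part of the disclosed embodiment.

```python
def determine_output_volume(reference_db, atmosphere_weight, noise_db,
                            neighbor_asleep=False, margin_db=3.0):
    """Multiply the reference volume by the per-atmosphere weight, then
    correct it to exceed the estimated in-vehicle noise level; the amount
    of increase is reduced when the adjacent occupant is asleep."""
    volume = reference_db * atmosphere_weight
    if volume <= noise_db:
        increase = (noise_db + margin_db) - volume
        if neighbor_asleep:
            increase *= 0.5  # hypothetical damping of the increase
        volume += increase
    return volume

print(determine_output_volume(60.0, 1.0, 65.0))                        # 68.0
print(determine_output_volume(60.0, 1.0, 65.0, neighbor_asleep=True))  # 64.0
```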
  • The control unit 11A may determine the presence or absence of occupants in the vehicle based on the image signal obtained by photographing the vehicle interior, and may start the process of FIG. 11 when it determines that there are a plurality of occupants in the vehicle. Sensor information, such as that of a weight sensor provided in each seat, may be used for this determination. This suppresses the power consumption of the sound output control device 2A.
  • As described above, in the sound output control device 2A according to Embodiment 2, the control unit 11A controls the sound output for each occupant based on the in-vehicle noise for each occupant in addition to the in-vehicle atmosphere for each occupant. Even with this configuration, the sound output is controlled for each occupant, so that sound can be output at a volume suitable for each occupant of the vehicle.
  • Embodiment 3. FIG. 12 is a block diagram showing the configuration of a sound output control device 2B according to Embodiment 3 of the present invention. In FIG. 12, the same components as those in FIGS. 4 and 10 are denoted by the same reference numerals, and their description is omitted.
  • the sound output control device 2B includes an estimation unit 10 or an estimation unit 10A and a control unit 11B as a configuration for controlling sound output.
  • the control unit 11B includes a determination unit 19, a history information acquisition unit 20, and a volume control unit 21.
  • The determination unit 19 determines whether or not each occupant has previous riding experience in the vehicle.
  • the determination target occupant is specified by the result of acoustic analysis of the sound signal for each occupant.
  • The history information acquisition unit 20 reads, from the storage unit 7, the sound output history information of an occupant determined by the determination unit 19 to have riding experience, and outputs it to the volume control unit 21. For an occupant determined to have no riding experience, the history information acquisition unit 20 instead outputs the sound output default information to the volume control unit 21.
  • the default information is information indicating a default value of the sound output device, and is stored in the storage unit 7 in association with each sound output device.
  • the volume control unit 21 is based on the atmosphere in the vehicle for each occupant estimated by the estimation unit 10 or the estimation unit 10A, and the sound output history information or default information for each occupant acquired by the history information acquisition unit 20, The sound output of the sound output device for each passenger is controlled.
  • the functions of the estimation unit 10 or the estimation unit 10A and the control unit 11B in the sound output control device 2B are realized by a processing circuit. That is, the sound output control device 2B includes a processing circuit for executing these functions. As shown in FIGS. 2A and 2B, the processing circuit may be dedicated hardware or a CPU that executes a program stored in a memory.
  • FIG. 13 is a flowchart showing the operation of the control unit 11B, and shows details of processing corresponding to step ST2 of FIG.
  • In the following, the configuration in which the control unit 11B is connected to the estimation unit 10 is described as an example of the sound output control device 2B.
  • the history information acquisition unit 20 acquires the estimation information of the atmosphere in the vehicle for each occupant estimated by the estimation unit 10 (step ST1e).
  • The determination unit 19 identifies an occupant from the image signal obtained by photographing the vehicle interior, and determines whether the identified occupant has riding experience (step ST2e). For example, information indicating the face patterns of occupants having riding experience is stored in the storage unit 7. The determination unit 19 compares the face patterns stored in the storage unit 7 with the occupant's face image obtained by image analysis of the image signal, and determines that the occupant has riding experience when the facial features extracted from the face image match a stored face pattern.
  • Alternatively, the determination unit 19 may compare the acoustic features of an occupant's voice, obtained by acoustic analysis of the uttered-voice sound signal input to the in-vehicle sound input device, with the stored acoustic features of the voices of occupants having riding experience, and determine that the occupant has riding experience if they match.
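A minimal sketch of this riding-experience determination is shown below. Feature extraction from the image or sound signal is abstracted away, and the stored patterns, tuple representation, and tolerance are all hypothetical.

```python
# Hypothetical stored feature patterns for occupants with riding experience.
STORED_PATTERNS = {"occupant_001": (0.12, 0.87, 0.44)}

def find_experienced_occupant(extracted_features, tolerance=0.05):
    """Return the ID of the occupant whose stored pattern matches the
    extracted face (or voice) features within the tolerance, else None."""
    for occupant_id, pattern in STORED_PATTERNS.items():
        if all(abs(a - b) <= tolerance
               for a, b in zip(extracted_features, pattern)):
            return occupant_id
    return None

print(find_experienced_occupant((0.12, 0.86, 0.44)))  # occupant_001
print(find_experienced_occupant((0.90, 0.10, 0.20)))  # None
```

A match here corresponds to step ST2e answering YES (history information is then read); no match corresponds to NO (default information is used).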
  • When the occupant is determined to have riding experience, the history information acquisition unit 20 acquires the sound output history information corresponding to that occupant from the storage unit 7 (step ST3e).
  • In the history information, the output volume set in the sound output device is registered for each occupant and for each sound source, in association with the occupant's identification information.
  • the history information acquisition unit 20 acquires history information corresponding to the occupant identification information from the storage unit 7.
  • When the occupant is determined to have no riding experience, the history information acquisition unit 20 reads, from the storage unit 7, the default information of the sound output device provided for the occupant's seat (step ST4e). The sound output history information or default information for each occupant obtained in this way is output to the volume control unit 21.
  • The volume control unit 21 determines the output volume of the sound output device for each occupant based on the in-vehicle atmosphere for each occupant and the sound output history information or default value (step ST5e). First, the volume control unit 21 checks whether the occupant whose atmosphere was estimated by the estimation unit 10 is the occupant corresponding to the sound output history information or the default information. The estimation unit 10 obtains the above estimation information for the occupant of each seat. Therefore, by comparing the seat of the occupant in the estimation information with the seat corresponding to the sound output history information or default information, the volume control unit 21 determines which sound output history information or default information corresponds to the occupant in the estimation information.
  • The volume control unit 21 sets, as the controlled output volume, a value obtained by correcting the output volume indicated by the sound output history information or the default information in consideration of the in-vehicle atmosphere of each occupant and the positional relationship between the occupants in the vehicle. For example, when the output volume indicated by the history information of occupant y seated next to occupant x is larger than the current output volume, the volume control unit 21 increases the volume to the output volume indicated by the history information if occupant x is awake. On the other hand, if occupant x is asleep, the volume control unit 21 sets a value smaller than the output volume indicated by the history information as the output volume of the sound output device corresponding to occupant y.
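The history-based volume selection with the neighbor correction described above can be sketched as follows. The halving applied while the neighbor sleeps is a hypothetical concrete choice of "a value smaller than the output volume indicated by the history information".

```python
def history_based_volume(history_volume, current_volume, neighbor_asleep):
    """Raise the volume to the occupant's stored history value when the
    neighbor is awake; use a value smaller than the history otherwise."""
    if neighbor_asleep:
        return history_volume * 0.5  # hypothetical "smaller than history"
    if history_volume > current_volume:
        return history_volume
    return current_volume

print(history_based_volume(8.0, 5.0, neighbor_asleep=False))  # 8.0
print(history_based_volume(8.0, 5.0, neighbor_asleep=True))   # 4.0
```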
  • the volume control unit 21 controls the sound output device to output sound at the output volume for each occupant determined in step ST5e (step ST6e).
  • The sound output history information is automatically created by the volume control unit 21 and stored in the storage unit 7. For example, when the volume is manually set by an occupant, the volume control unit 21 creates history information on the assumption that this volume is the optimal volume for that occupant, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere and in-vehicle noise at that time.
  • When the volume is not manually set by the occupant, the volume control unit 21 creates history information on the assumption that the volume previously set for the sound output device is the optimal volume for the occupant, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere and in-vehicle noise at that time.
  • The control unit 11B may determine the presence or absence of occupants in the vehicle based on the image signal obtained by photographing the vehicle interior, and may start the process of FIG. 13 when it determines that there are a plurality of occupants in the vehicle. Sensor information, such as that of a weight sensor provided in each seat, may be used for this determination. This suppresses the power consumption of the sound output control device 2B.
  • As described above, in the sound output control device 2B according to Embodiment 3, the control unit 11B controls the sound output for each occupant based on the sound output history information for each occupant in addition to the in-vehicle atmosphere for each occupant. Even with this configuration, the sound output is controlled for each occupant, so that sound can be output at a volume suitable for each occupant of the vehicle.
  • the control unit 11B may control the sound output for each occupant based on the atmosphere in the vehicle for each occupant, the in-vehicle noise for each occupant, and the sound output history information.
  • In this case, the sound output history information stored in the storage unit 7 is associated with the in-vehicle noise at the time the output volume was set, in addition to the estimation information of the in-vehicle atmosphere.
  • the history information acquisition unit 20 acquires history information for each occupant from the storage unit 7 based on the estimation information of the atmosphere in the vehicle and the noise in the vehicle estimated by the estimation unit 10A.
  • the volume control unit 21 determines the output volume of the sound output device for each occupant based on the estimated information of the atmosphere in the vehicle for each occupant, the estimated value of the in-vehicle noise, and the history information or default value of the sound output. Even with this configuration, sound output is controlled for each occupant, so that sound can be output at a volume suitable for each occupant of the vehicle.
  • Embodiment 4.
  • In the fourth embodiment, a case where the sound output in an in-car call performed among a plurality of occupants is controlled and a case where the sound output in an in-car call performed between one-to-one occupants is controlled are described. Since the configurations of the sound output control system and the sound output control device according to the fourth embodiment are the same as those of the first embodiment, FIGS. 1 and 4 are referred to below for these configurations.
  • FIG. 14 is a diagram illustrating an arrangement example of the sound output control system 1 according to the fourth embodiment.
  • a vehicle 200A shown in FIG. 14 is a three-row seat vehicle.
  • the sound input device 4 is provided corresponding to the occupant A who is a driver, and the occupant in the second row seats has the portable terminal 5a, and the occupant in the third row seat has the portable terminal 5b.
  • the portable terminal 5a and the portable terminal 5b have the same configuration as the portable terminal 5 shown in FIG. 1 and function in the same manner.
  • A sound output device 3a is provided corresponding to the occupant A who is the driver, a sound output device 3b is provided corresponding to the occupant B in the passenger seat, a sound output device 3c is provided corresponding to the seat on which the occupant C in the second row is seated, and a sound output device 3d is provided corresponding to the seat on which the occupant D in the second row is seated.
  • a sound output device 3e is provided corresponding to the seat on which the third row of passengers E is seated, and a sound output device 3f is provided corresponding to the seat on which the third row of passengers F is seated.
  • FIG. 15 is a flowchart showing sound output control processing in an in-car call performed between a plurality of occupants.
  • in-car calls performed between a plurality of occupants will be referred to as public-mode in-car calls, and sound output control in public-mode in-car calls performed by occupant A and occupants B to F will be described.
  • The arithmetic processing unit 9 detects the sound input device of the parent device (step ST1f). For example, the arithmetic processing unit 9 detects a sound input device or portable terminal set in advance as the parent device in the public mode, or detects the sound input device or portable terminal of the device on which a public-mode in-car call start operation was performed, as the sound input device of the parent device.
  • Here, it is assumed that the sound input device 4 is the sound input device of the parent device, and the sound output device 3a is the sound output device of the parent device.
  • the control unit 11 masks (blocks) a call using a sound input device other than the parent device (step ST2f). As a result, only the call between the child device and the parent device becomes valid, and the in-car call in the public mode performed by the passenger A and the passenger B to the passenger F is started. In the in-car call in the public mode, a two-way conversation between the parent device and the child device is possible.
  • the acquisition unit 15 acquires input sound signals input to the sound input device 4 of the parent device and the microphones of the portable terminals 5a and 5b that are child devices (step ST3f). Thereby, the acquisition unit 15 receives the sound around the first row of seats input to the sound input device 4, the sound around the second row of seats input to the microphone of the mobile terminal 5a, and the microphone of the mobile terminal 5b. The sound around the third row of seats input to is acquired as a sound signal.
  • the analysis unit 16 acoustically analyzes the sound signal acquired by the acquisition unit 15 and obtains information serving as an estimation standard for the atmosphere in the vehicle for each occupant.
  • The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the information serving as the estimation criterion obtained by the analysis unit 16 (step ST4f). For example, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant from the sound signal for each occupant based on the conversation content, the tone of the voice, and the utterance frequency. The atmosphere estimation unit 17 may also estimate the in-vehicle atmosphere for each occupant using the in-vehicle image information.
  • The control unit 11 determines the output volume for each occupant based on the in-vehicle atmosphere for each occupant estimated by the atmosphere estimation unit 17, and controls the sound output device of the parent device and the sound output devices of the child devices so that sound is output at the determined output volumes (step ST5f).
  • the sound output device of the child device may be a speaker provided in the mobile terminals 5a and 5b, but may also be the sound output devices 3b to 3f corresponding to the passengers B to F.
  • control is performed to increase the output volume of the sound output device corresponding to an occupant who is active in conversation and to decrease the output volume of the sound output device corresponding to an occupant who is not active in conversation.
  • the volume of the sound output device corresponding to the sleeping passenger is set to 0 (output stop).
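The public-mode rule described above (raise the volume for conversationally active occupants, lower it for inactive occupants, and stop output for sleeping occupants) can be sketched as follows; the scaling factors are illustrative, not part of the disclosed embodiment.

```python
def public_mode_volume(base_volume, atmosphere):
    """Map each occupant's estimated atmosphere to an output volume:
    active -> raised, inactive -> lowered, sleeping -> 0 (output stop)."""
    if atmosphere == "sleeping":
        return 0.0
    if atmosphere == "active":
        return base_volume * 1.2  # illustrative raise
    return base_volume * 0.8      # illustrative reduction for inactive

atmospheres = {"A": "active", "B": "sleeping", "C": "calm"}
volumes = {occupant: public_mode_volume(10.0, a)
           for occupant, a in atmospheres.items()}
print(volumes)  # {'A': 12.0, 'B': 0.0, 'C': 8.0}
```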
  • In step ST6f, the control unit 11 checks whether or not the in-car call in the public mode has ended. If it has not ended (step ST6f; NO), the process returns to step ST3f. If the call has ended (step ST6f; YES), the processing in FIG. 15 is ended. As a result, in-car calls in the public mode can be performed at a volume suitable for each occupant.
  • Note that the control unit 11 of the sound output control device 2 may synthesize the media reproduction sound reproduced by the media player of the parent device into the output sound, and cause the sound output devices to output the synthesized sound at the output volume determined based on the in-vehicle atmosphere for each occupant.
  • FIG. 16 is a diagram illustrating another arrangement example of the sound output control system 1 according to the fourth embodiment.
  • a vehicle 200B shown in FIG. 16 is a vehicle having three or more rows of seats.
  • the sound input device 4 is provided corresponding to the occupant A who is the driver, and the occupant in the last row of seats has the portable terminal 5c.
  • the portable terminal 5c has the same configuration as the portable terminal 5 shown in FIG. 1, and functions in the same manner.
  • A sound output device 3a is provided corresponding to the occupant A who is the driver, a sound output device 3b is provided corresponding to the occupant B in the passenger seat, a sound output device 3c is provided corresponding to the seat on which the occupant C in the second row is seated, and a sound output device 3d is provided corresponding to the seat on which the occupant D in the second row is seated.
  • a sound output device 3i is provided corresponding to the seat on which the occupant I in the last row is seated, and a sound output device 3j is provided corresponding to the seat on which the occupant J is seated.
  • FIG. 17 is a flowchart showing a sound output control process in an in-car call performed between one-on-one passengers.
  • in-car calls performed between one-to-one occupants will be referred to as private mode in-car calls, and sound output control in private mode in-car calls performed by occupants A and J will be described.
  • The arithmetic processing unit 9 detects the sound input device of the parent device (step ST1g). For example, the arithmetic processing unit 9 detects a sound input device or portable terminal set in advance as the parent device in the private mode, or detects the sound input device or portable terminal of the device on which a private-mode in-car call start operation was performed.
  • Here, it is assumed that the sound input device 4 is the sound input device of the parent device, and the sound output device 3a is the sound output device of the parent device.
  • Next, the arithmetic processing unit 9 selects a child device (step ST2g).
  • For example, the arithmetic processing unit 9 detects a sound input device or portable terminal set in advance as the child device in the private mode, or detects the sound input device or portable terminal of the device on which a private-mode in-car call start operation was performed, as the child device.
  • Here, it is assumed that the portable terminal 5c is the child device.
  • the acquisition unit 15 acquires the input sound signal input to the sound input device 4 of the parent device and the microphone of the portable terminal 5c that is the child device (step ST3g). Thereby, the acquisition unit 15 acquires the sound around the first row of seats input to the sound input device 4 and the sound around the last row of seats input to the microphone of the mobile terminal 5c as sound signals.
  • the analysis unit 16 acoustically analyzes the sound signal acquired by the acquisition unit 15 and obtains information serving as an estimation standard for the atmosphere in the vehicle for each occupant.
  • The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the information serving as the estimation criterion obtained by the analysis unit 16 (step ST4g). For example, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant from the sound signal for each occupant based on the conversation content, the tone of the voice, and the utterance frequency. The atmosphere estimation unit 17 may also estimate the in-vehicle atmosphere for each occupant using the in-vehicle image information.
  • The control unit 11 determines the output volume for each occupant based on the in-vehicle atmosphere for each occupant estimated by the atmosphere estimation unit 17, and controls the sound output of the sound output device 3a and the sound output device of the child device so that sound is output at the determined output volumes (step ST5g).
  • the sound output device of the child device may be a speaker included in the mobile terminal 5c, but may be a sound output device 3j corresponding to the passenger J.
  • For example, control is performed to increase the output volume of the sound output device corresponding to an occupant who is active in the conversation and to decrease the output volume of the sound output device corresponding to an occupant who is not active in the conversation.
  • In step ST6g, the control unit 11 checks whether or not the in-car call in the private mode has ended. If it has not ended (step ST6g; NO), the process returns to step ST3g. If the call has ended (step ST6g; YES), the processing in FIG. 17 is ended.
  • in-car calls in private mode can be performed at a volume suitable for each occupant.
  • Note that the control unit 11 of the sound output control device 2 may synthesize the media reproduction sound reproduced by the media player of the parent device into the output sound, and cause the sound output devices to output the synthesized sound at the output volume determined based on the in-vehicle atmosphere for each occupant.
  • Although the case where the sound output control device 2 controls the sound output in the public-mode in-car call and the private-mode in-car call has been described, the sound output control device 2A or the sound output control device 2B may perform the same control.
  • As described above, in the sound output control device 2 according to Embodiment 4, the control unit 11 controls the sound output in the public-mode in-car call for each occupant. Thereby, the public-mode in-car call can be performed at a volume suitable for each occupant.
  • Similarly, the control unit 11 controls the sound output in the private-mode in-car call for each occupant. Thereby, the private-mode in-car call can be performed at a volume suitable for each occupant.
  • the sound output control device may be provided in a portable terminal brought into the vehicle as long as the sound output for each occupant can be controlled, or may be mounted on a server that can communicate with the vehicle side.
  • Note that, within the scope of the present invention, the embodiments may be freely combined, any component of each embodiment may be modified, and any component may be omitted in each embodiment.
  • Since the sound output control device according to the present invention can output sound at a volume suitable for each vehicle occupant, it is suitable for controlling the sound output of in-car calls between vehicle occupants or of media reproduction in the vehicle.
  • 1 sound output control system 2, 2A, 2B sound output control device, 3a-3j sound output device, 4, 4a-4d sound input device, 5, 5a-5c mobile terminal, 6 vehicle information acquisition unit, 7 storage unit, 8 imaging device, 9 arithmetic processing unit, 10, 10A estimation unit, 11, 11A, 11B control unit, 12a-12c signal conversion unit, 13a, 13b signal amplification unit, 14 communication I / F unit, 15, 15A acquisition unit, 16 analysis unit, 17 atmosphere estimation unit, 18 noise estimation unit, 19 determination unit, 20 history information acquisition unit, 21 volume control unit, 100, 102 signal I / F, 101 in-vehicle speaker, 103 in-vehicle microphone, 104 communication I / F , 106 storage device, 107 processing circuit, 108 CPU, 109 memory, 200, 200A, 200B vehicle.

Abstract

The present invention comprises an estimation unit (10) that estimates in-vehicle atmosphere for each of a plurality of occupants in a vehicle (200) and a control unit (11) that controls sound output for each occupant on the basis of the in-vehicle atmosphere for each occupant estimated by the estimation unit (10).

Description

Sound output control device, sound output control system, and sound output control method
 The present invention relates to a sound output control device, a sound output control system, and a sound output control method for controlling sound output to each of a plurality of occupants of a vehicle.
 Conventionally, techniques have been proposed that estimate an occupant's situation from the atmosphere in the vehicle and provide information suited to that situation. For example, the vehicle information providing apparatus described in Patent Document 1 estimates the atmosphere in the vehicle from the presence or absence of conversation and, when there is conversation, from laughing voices, angry voices, words uttered by the occupants, images of fellow passengers, and the like, and further estimates the driver's situation from the atmosphere in the vehicle.
Patent Document 1: JP 2008-90509 A
 In the conventional technique represented by Patent Document 1, the overall atmosphere in the vehicle is estimated, so there is a problem that sound output cannot be controlled at a volume suited to each individual occupant.
 Consider, as an example, a case in which the driver and a rear-seat occupant are conversing through an in-vehicle call that uses in-vehicle microphones and in-vehicle speakers, while the occupant of the front passenger seat is sleeping.
 In this case, when the vehicle information providing apparatus of Patent Document 1 detects laughter from the driver, it estimates that the conversation is lively and that the atmosphere in the vehicle is active, even though the front-passenger-seat occupant is sleeping.
 If the in-vehicle speakers are controlled on the basis of this estimation result to assist the conversation, the volumes of both the driver-side speaker and the rear-seat-side speaker are raised to make the two occupants' speech easier to hear. The speech may therefore disturb the sleep of the front-passenger-seat occupant.
 As another example, suppose the driver and a rear-seat occupant are connected through the in-vehicle call, but the driver is concentrating on driving and not actively participating in the conversation.
 In this case, when the apparatus of Patent Document 1 detects laughter or the like from the rear-seat occupant, it estimates that the conversation is lively and the atmosphere in the vehicle is active, even though the driver is not actively participating.
 If the in-vehicle speakers are controlled on the basis of this estimation result to assist the conversation, the volume of the driver-side speaker is raised to make the rear-seat occupant's speech easier to hear, even though the driver is not taking part. The driver consequently finds the speaker output noisy.
 The present invention solves the above problems, and an object thereof is to provide a sound output control device, a sound output control system, and a sound output control method capable of outputting sound at a volume suited to each occupant of a vehicle.
 A sound output control device according to the present invention includes an estimation unit that estimates the in-vehicle atmosphere for each of a plurality of occupants of a vehicle, and a control unit that controls sound output for each occupant on the basis of the in-vehicle atmosphere estimated for that occupant by the estimation unit.
 According to the present invention, sound output is controlled for each occupant on the basis of the estimated in-vehicle atmosphere for that occupant, so sound can be output at a volume suited to each occupant of the vehicle.
Brief Description of Drawings

FIG. 1 is a block diagram showing the configuration of a sound output control system according to Embodiment 1 of the present invention.
FIG. 2A is a block diagram showing a hardware configuration that realizes the functions of the sound output control system according to Embodiment 1. FIG. 2B is a block diagram showing a hardware configuration that executes software realizing the functions of the sound output control system according to Embodiment 1.
FIG. 3 is a flowchart showing a sound output control method according to Embodiment 1.
FIG. 4 is a block diagram showing the configuration of a sound output control device according to Embodiment 1.
FIG. 5 is a flowchart showing the operation of the estimation unit in Embodiment 1.
FIG. 6 is a diagram showing an arrangement example of the sound output control system according to Embodiment 1.
FIG. 7 is a diagram showing another arrangement example of the sound output control system according to Embodiment 1.
FIG. 8 is a flowchart showing call processing by a mobile terminal.
FIG. 9 is a flowchart showing control processing, by the sound output control device according to Embodiment 1, of the sound output of a mobile terminal.
FIG. 10 is a block diagram showing the configuration of a sound output control device according to Embodiment 2 of the present invention.
FIG. 11 is a flowchart showing the operation of the estimation unit in Embodiment 2.
FIG. 12 is a block diagram showing the configuration of a sound output control device according to Embodiment 3 of the present invention.
FIG. 13 is a flowchart showing the operation of the control unit in Embodiment 3.
FIG. 14 is a diagram showing an arrangement example of a sound output control system according to Embodiment 4 of the present invention.
FIG. 15 is a flowchart showing control processing of sound output in an in-vehicle call between one occupant and multiple occupants.
FIG. 16 is a diagram showing another arrangement example of the sound output control system according to Embodiment 4.
FIG. 17 is a flowchart showing control processing of sound output in a one-to-one in-vehicle call.
 Hereinafter, in order to describe the present invention in more detail, embodiments for carrying out the invention will be described with reference to the accompanying drawings.

Embodiment 1.
 FIG. 1 is a block diagram showing the configuration of a sound output control system 1 according to Embodiment 1 of the present invention. The sound output control system 1 controls the sound output for each occupant of a vehicle, namely the sound output of in-vehicle calls held between occupants or of media playback.
 The sound output control device 2 controls the sound output of the sound output devices 3a to 3d, and can acquire information from each of the sound input devices 4a to 4d, the mobile terminal 5, the vehicle information acquisition unit 6, the storage unit 7, and the imaging device 8. As functional components, the sound output control device 2 includes an arithmetic processing unit 9, an estimation unit 10 contained in the arithmetic processing unit 9, and a control unit 11.
 The sound output devices 3a to 3d are, for example, in-vehicle speakers arranged one per seat, and convert a sound signal, which is an electrical signal, into sound and output it. The sound signal of the output sound is converted into an analog signal by the signal conversion unit 12a and amplified by the signal amplification unit 13a before being output from each of the sound output devices 3a to 3d.
 The sound input devices 4a to 4d are, for example, in-vehicle microphones arranged one per seat, and convert input sound into sound signals. The sound signals from the sound input devices 4a to 4d are amplified by the signal amplification unit 13b and converted into digital signals by the signal conversion unit 12b before being input to the arithmetic processing unit 9.
 The mobile terminal 5 is a communication terminal such as a smartphone, mobile phone, or tablet PC, and can connect to the sound output control device 2 for communication to output the sound of the in-vehicle call or media playback described above. The sound output control device 2 also controls the sound output of a speaker included in the mobile terminal 5.
 The mobile terminal 5 communicates with the sound output control device 2 through the communication I/F unit 14.
 Communication methods by which the mobile terminal 5 communicates with the sound output control device 2 include, for example, short-range wireless communication such as Bluetooth (registered trademark; hereinafter omitted).
 The vehicle information acquisition unit 6 acquires information on the state of the vehicle and its driving operation (hereinafter referred to as vehicle information). Vehicle information includes, for example, the degree of window opening, engine speed, vehicle speed, accelerator opening, and vehicle position. The vehicle information may be acquired, for example, from an ECU (Electronic Control Unit) through an in-vehicle network. The vehicle information acquisition unit 6 may also calculate vehicle information from data acquired from the vehicle side; for example, the vehicle speed may be a value calculated from pulse signals output in accordance with wheel rotation.
 The storage unit 7 stores sound output history information for each vehicle occupant. In the history information, the output volume set on a sound output device is registered for each occupant and for each sound source. The history information is stored in the storage unit 7 in association with the estimated in-vehicle atmosphere at the time the output volume was set, and may also be stored in association with the in-vehicle noise at that time.
 The sound output history information is created automatically by the control unit 11 and stored in the storage unit 7.
 For example, when an occupant sets a volume manually, the control unit 11 treats that volume as the optimal volume for the occupant, creates history information, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere, the in-vehicle noise, and the occupant's identification information at that time.
 If no volume is set by the occupant, the control unit 11 treats the volume previously set on the sound output device as the optimal volume for the occupant, creates history information accordingly, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere, the in-vehicle noise, and the occupant's identification information at that time.
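One possible shape for such a history record is sketched below. The field names and the record layout are assumptions made for illustration; the patent does not specify a storage schema.

```python
# Hypothetical structure for one sound-output history record: an output
# volume keyed by occupant and sound source, stored together with the
# atmosphere estimate and in-vehicle noise at the time it was set.
from dataclasses import dataclass

@dataclass
class HistoryRecord:
    occupant_id: str    # occupant identification information
    sound_source: str   # e.g. "in_vehicle_call" or "media"
    volume: int         # volume treated as optimal for this occupant
    atmosphere: str     # estimated in-vehicle atmosphere when set
    noise_level: float  # in-vehicle noise when the volume was set

def record_volume(history, occupant_id, source, volume, atmosphere, noise):
    """Append one record, as the control unit 11 does automatically."""
    rec = HistoryRecord(occupant_id, source, volume, atmosphere, noise)
    history.append(rec)
    return rec

history = []
record_volume(history, "occupant_A", "in_vehicle_call", 7, "fun", 52.0)
```

A later lookup could then match the current occupant, atmosphere, and noise against stored records to recall a preferred volume.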
 The imaging device 8 is a camera provided in the vehicle, and captures the state of the vehicle interior including the occupants. An image of an occupant's face cropped from an image captured by the imaging device 8 is used for personal authentication, and may also be used for estimating the in-vehicle atmosphere.
 The image information captured by the imaging device 8 is converted into an image signal by the signal conversion unit 12c and output to the estimation unit 10 and the control unit 11.
 The arithmetic processing unit 9 performs arithmetic processing related to sound output, such as echo cancellation and noise cancellation. The arithmetic processing unit 9 also includes the estimation unit 10.
 The estimation unit 10 estimates the in-vehicle atmosphere for each of the plurality of occupants of the vehicle.
 The control unit 11 controls the sound output of the sound output devices 3a to 3d for each occupant on the basis of the in-vehicle atmosphere estimated for that occupant by the estimation unit 10.
 Although the estimation unit 10 and the control unit 11 are here provided in a single sound output control device 2, they may instead be provided in separate devices. In that case, both devices have a communication function that allows them to exchange data with each other.
 FIG. 2A is a block diagram showing a hardware configuration that realizes the functions of the sound output control system 1 according to Embodiment 1. FIG. 2B is a block diagram showing a hardware configuration that executes software realizing the functions of the sound output control system 1 according to Embodiment 1.
 In FIGS. 2A and 2B, the signal I/F 100 corresponds to the signal conversion unit 12a and the signal amplification unit 13a shown in FIG. 1. The in-vehicle speakers 101 correspond to the sound output devices 3a to 3d shown in FIG. 1. The signal I/F 102 corresponds to the signal conversion unit 12b and the signal amplification unit 13b shown in FIG. 1. The in-vehicle microphones 103 correspond to the sound input devices 4a to 4d shown in FIG. 1. The communication I/F 104 corresponds to the communication I/F unit 14 shown in FIG. 1. The signal I/F 105 corresponds to the signal conversion unit 12c shown in FIG. 1. The storage device 106 corresponds to the storage unit 7 shown in FIG. 1.
 The functions of the estimation unit 10 and the control unit 11 in the sound output control device 2 are realized by a processing circuit. That is, the sound output control device 2 includes a processing circuit for executing the processes of steps ST1 and ST2 shown in FIG. 3. The processing circuit may be dedicated hardware, or a CPU (Central Processing Unit) that executes programs stored in a memory.
 When the processing circuit is the dedicated hardware shown in FIG. 2A, the processing circuit 107 may be, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. The functions of the estimation unit 10 and the control unit 11 may be realized by separate processing circuits, or together by a single processing circuit.
 When the processing circuit is the CPU 108 shown in FIG. 2B, the functions of the estimation unit 10 and the control unit 11 are realized by software, firmware, or a combination of software and firmware. The software or firmware is written as programs and stored in the memory 109.
 The CPU 108 realizes the functions of the respective units by reading and executing the programs stored in the memory 109. That is, the sound output control device 2 includes the memory 109 for storing programs that, when executed by the CPU 108, result in the execution of the processes of steps ST1 and ST2 shown in FIG. 3.
 These programs cause a computer to execute the procedures or methods of the estimation unit 10 and the control unit 11.
 The memory 109 may be, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), ROM (Read Only Memory), flash memory, EPROM (Erasable Programmable Read Only Memory), or EEPROM (Electrically-EPROM), or a magnetic disk, flexible disk, optical disc, compact disc, mini disc, DVD, or the like.
 Some of the functions of the estimation unit 10 and the control unit 11 may be realized by dedicated hardware and others by software or firmware.
 For example, the function of the estimation unit 10 may be realized by a processing circuit as dedicated hardware, while the function of the control unit 11 is realized by the CPU 108 reading and executing a program stored in the memory 109.
 In this way, the processing circuit can realize each of the above functions by hardware, software, firmware, or a combination thereof.
 Next, the operation will be described.
 FIG. 3 is a flowchart showing the sound output control method according to Embodiment 1, that is, a series of processes that control the sound output for each occupant on the basis of the in-vehicle atmosphere for that occupant.
 First, the estimation unit 10 estimates the in-vehicle atmosphere for each occupant (step ST1).
 Next, the control unit 11 controls the sound output of the sound output devices 3a to 3d for each occupant on the basis of the in-vehicle atmosphere for that occupant (step ST2). Since the sound output is controlled for each occupant on the basis of that occupant's in-vehicle atmosphere, sound can be output at a volume suited to each occupant of the vehicle.
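The two-step flow of steps ST1 and ST2 can be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the atmosphere labels, the stand-in utterance-count heuristic, and the atmosphere-to-volume mapping are all assumptions made for the example.

```python
# Minimal sketch of the two-step control loop (steps ST1 and ST2):
# estimate a per-occupant atmosphere, then set each speaker's volume.

def estimate_atmosphere(occupant_signals):
    """Step ST1: estimate an in-vehicle atmosphere label per occupant.

    `occupant_signals` stands in for the real acoustic/image analysis;
    here it is simply an utterance count per occupant.
    """
    return {occ: ("fun" if count >= 3 else "quiet")
            for occ, count in occupant_signals.items()}

def control_volumes(atmospheres, base_volume=10):
    """Step ST2: set each occupant's speaker volume from their atmosphere."""
    weight = {"fun": 1.2, "quiet": 0.0}  # quiet occupants get output stopped
    return {occ: base_volume * weight[atm] for occ, atm in atmospheres.items()}

signals = {"A": 5, "B": 0, "C": 4, "D": 4}  # utterance counts per occupant
volumes = control_volumes(estimate_atmosphere(signals))
print(volumes)  # occupant B (silent, possibly sleeping) gets volume 0
```

The point of the sketch is only that volume is decided per occupant, not from a single vehicle-wide atmosphere.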
 Next, the functional configuration of the estimation unit 10 will be described.
 FIG. 4 is a block diagram showing the configuration of the sound output control device 2 according to Embodiment 1, showing only the components that control sound output. As shown in FIG. 4, the estimation unit 10 includes an acquisition unit 15, an analysis unit 16, and an atmosphere estimation unit 17.
 The acquisition unit 15 acquires, for each occupant, occupant information used for estimating the in-vehicle atmosphere. For example, the acquisition unit 15 acquires input sound signals as occupant information from the sound input devices 4a to 4d provided one per seat.
 The analysis unit 16 analyzes the occupant information acquired by the acquisition unit 15 to obtain information that serves as the basis for estimating the in-vehicle atmosphere. For example, the analysis unit 16 acoustically analyzes an occupant's sound signal to obtain the content of the conversation, the tone of the voice, or the utterance frequency.
 The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant on the basis of the analysis results of the analysis unit 16. For example, the atmosphere estimation unit 17 estimates each occupant's in-vehicle atmosphere from that occupant's sound signal on the basis of the conversation content, voice tone, and utterance frequency.
 The control unit 11 determines an output volume for each occupant on the basis of the estimated in-vehicle atmosphere for that occupant, and controls the sound output devices 3a to 3d to output sound at the determined volumes.
 For example, the output volume of the sound output device corresponding to an occupant who is active in the conversation is raised, and the output volume of the sound output device corresponding to an occupant who is not active is lowered. The volume of the sound output device corresponding to a sleeping occupant may be set to 0 (output stopped).
 In addition to the processing described above, the acquisition unit 15 may acquire in-vehicle image information captured by the imaging device 8. In this case, the analysis unit 16 analyzes the image information to obtain the occupant's facial expression or gesture, and the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant on the basis of that expression or gesture.
 The atmosphere estimation unit 17 may also estimate each occupant's in-vehicle atmosphere on the basis of both the acoustic analysis results of the input sound signals from the sound input devices 4a to 4d and the image analysis results of the in-vehicle image information.
 FIG. 5 is a flowchart showing the operation of the estimation unit 10, detailing the processing corresponding to step ST1 of FIG. 3. FIG. 6 is a diagram showing an arrangement example of the sound output control system 1 according to Embodiment 1, in which the system is installed in a vehicle 200.
 In the example shown in FIG. 6, the sound input device 4a is provided for occupant A, the driver; the sound input device 4b for occupant B in the front passenger seat; the sound input device 4c for occupant C in the rear seat; and the sound input device 4d for occupant D in the rear seat.
 The operation of the estimation unit 10 in the sound output control system 1 shown in FIG. 6 is described below.
 The acquisition unit 15 acquires, for each occupant, the occupant information used for estimating the in-vehicle atmosphere (step ST1a). For example, the acquisition unit 15 acquires the sound signals input to the sound input devices 4a to 4d either continuously while the sound output control system 1 is running, or periodically or continuously after a triggering event occurs. In this way, the sound around each occupant's seat or each occupant's speech is acquired as a sound signal. Triggering events include operations that initiate sound output, such as starting an in-vehicle call or starting media playback.
 Next, the analysis unit 16 analyzes the occupant information acquired by the acquisition unit 15 to obtain, for each occupant, the information on which the in-vehicle atmosphere estimate is based (step ST2a).
 For example, the analysis unit 16 acoustically analyzes the sound signal to extract speech, performs speech recognition on it, and analyzes the presence or absence of specific keywords, treating the result as the content of each occupant's conversation. The specific keywords serve as indicators of how lively the conversation between occupants is, for example "interesting" or "not interested".
 The analysis unit 16 may also acoustically analyze the sound signal to obtain the tone of the occupant's speech, or its loudness.
 Furthermore, the analysis unit 16 may count, for each occupant, the number of times speech could be extracted by acoustic analysis of the sound signal, and obtain each occupant's utterance frequency from that count.
 The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant on the basis of the analysis results of the analysis unit 16 (step ST3a). For example, table data associating keywords with information indicating the in-vehicle atmosphere is stored in the storage unit 7, and the atmosphere estimation unit 17 refers to the table data read from the storage unit 7 to estimate the atmosphere corresponding to a keyword extracted from the sound signal. If an occupant's speech contains the keyword "interesting", that occupant is estimated to be in a "fun" atmosphere. The atmosphere estimation unit 17 may also estimate that an occupant whose speech tone is higher than a threshold is conversing actively and in a "fun" atmosphere, and that an occupant whose speech tone is at or below the threshold is not conversing actively and is in a "dull" atmosphere. Furthermore, when an occupant's utterance frequency is at or below a threshold, the atmosphere estimation unit 17 may judge that the occupant is not participating in the conversation or is sleeping, and estimate a "quiet" atmosphere.
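The keyword-table and threshold logic of step ST3a might look like the following sketch. The table contents, threshold values, precedence order, and atmosphere labels are illustrative assumptions, not values taken from the patent.

```python
# Sketch of step ST3a: estimate one occupant's atmosphere from keyword
# table data, voice tone, and utterance frequency. All table entries and
# thresholds below are assumptions for illustration.

KEYWORD_TABLE = {"interesting": "fun", "not interested": "dull"}
TONE_THRESHOLD = 200.0    # assumed tone threshold
FREQUENCY_THRESHOLD = 1   # assumed utterances-per-interval threshold

def estimate_occupant_atmosphere(keywords, tone, utterance_count):
    # Utterance frequency at or below the threshold: the occupant is
    # judged to be not participating or sleeping -> "quiet".
    if utterance_count <= FREQUENCY_THRESHOLD:
        return "quiet"
    # A recognized keyword is assumed to take precedence here.
    for word in keywords:
        if word in KEYWORD_TABLE:
            return KEYWORD_TABLE[word]
    # Otherwise fall back to voice tone.
    return "fun" if tone > TONE_THRESHOLD else "dull"

print(estimate_occupant_atmosphere(["interesting"], 150.0, 5))  # fun
print(estimate_occupant_atmosphere([], 150.0, 5))               # dull
print(estimate_occupant_atmosphere([], 250.0, 0))               # quiet
```

Running this once per occupant yields the per-occupant atmosphere labels that step ST2 consumes.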
 In addition to or instead of the sound signals input to the sound input devices 4a to 4d, the acquisition unit 15 may acquire in-vehicle image information captured by the imaging device 8. This image information is assumed to capture all occupants in the vehicle.
 When the in-vehicle image information is input from the acquisition unit 15, the analysis unit 16 analyzes it to extract an image region for each occupant, and analyzes each occupant's region to obtain the occupant's facial expression or gesture. For example, the occupant's facial expression can be identified by comparing the occupant's face image extracted from the captured image with image patterns of facial expressions; gestures can be identified in a similar way. The gestures identified here serve as indicators for estimating the occupant's engagement in the conversation, for example "operating a smartphone" or "facing the window".
 The atmosphere estimation unit 17 refers to table data associating facial expressions and gestures with information indicating the atmosphere, and estimates the atmosphere corresponding to the expression or gesture identified from the image information. For example, an occupant with a smiling expression is estimated to be in a "fun" atmosphere, and an occupant with downcast eyes in a "dull" atmosphere.
The atmosphere estimation unit 17 may also estimate an occupant's final atmosphere by considering the analysis result of the sound signal and the analysis result of the image information in a complementary manner. For example, if an occupant's utterance frequency is at or below the threshold but the occupant's expression is smiling, it is determined that the occupant is not sleeping but is engaged in conversation or media playback, and the occupant is estimated to be in a "fun" atmosphere.
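As a minimal sketch of the complementary, table-based estimation described above — the table entries, threshold value, and atmosphere labels are illustrative assumptions, not values from this disclosure:

```python
# Hypothetical sketch of per-occupant atmosphere estimation combining a
# sound cue (utterance frequency) with an image cue (facial expression),
# in the complementary manner described for atmosphere estimation unit 17.
# All table contents and the threshold are assumed values.

EXPRESSION_TABLE = {"smile": "fun", "downcast": "bored"}
UTTERANCE_THRESHOLD = 3  # utterances per minute (assumed unit)

def estimate_atmosphere(utterance_freq, expression):
    """A quiet but smiling occupant is treated as listening to the
    conversation or media playback, not sleeping."""
    if utterance_freq <= UTTERANCE_THRESHOLD:
        if expression == "smile":
            return "fun"    # quiet but smiling: listening, so "fun"
        return "quiet"      # quiet and not smiling: possibly asleep
    return EXPRESSION_TABLE.get(expression, "fun")

print(estimate_atmosphere(0, "smile"))
print(estimate_atmosphere(0, "downcast"))
```

The complementary rule only overrides the sound-based estimate when the image cue contradicts it; otherwise the expression table alone decides.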
When the processing shown in FIG. 5 is completed, the control unit 11 determines an output volume for each occupant based on the in-vehicle atmosphere estimated for each occupant by the estimation unit 10, and controls the sound output devices so that sound is output at the determined volumes.
For example, the control unit 11 calculates the output volume of the sound output device for each occupant by multiplying a reference volume of each sound output device by a weight set for each in-vehicle atmosphere.
Alternatively, table data associating in-vehicle atmospheres with sound output device volumes may be stored in the storage unit 7, and the control unit 11 may select the output volume of the sound output device for each occupant by referring to the table data read from the storage unit 7.
Furthermore, the control unit 11 corrects the output volume obtained for each occupant from the in-vehicle atmosphere, taking into account the in-vehicle atmosphere of each occupant and the positional relationship among the occupants in the vehicle.
For example, when raising the volume of the sound output device corresponding to an occupant y seated next to an occupant x, the increase is corrected to be smaller when occupant x is sleeping (a "quiet" atmosphere) than when occupant x is awake.
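The weight-multiplication and neighbor correction described above can be sketched as follows; the weight values and the damping factor are assumptions for illustration, not values from this disclosure:

```python
# Illustrative sketch: output volume = reference volume x atmosphere
# weight, with any increase damped when the neighbouring occupant is in
# a "quiet" atmosphere. Weights and damping factor are assumed values.

ATMOSPHERE_WEIGHT = {"fun": 1.2, "bored": 0.8, "quiet": 0.0}
NEIGHBOR_DAMPING = 0.5  # assumed: halve any rise next to a sleeper

def output_volume(reference, atmosphere, neighbor_atmosphere=None):
    volume = reference * ATMOSPHERE_WEIGHT[atmosphere]
    increase = volume - reference
    if increase > 0 and neighbor_atmosphere == "quiet":
        volume = reference + increase * NEIGHBOR_DAMPING
    return volume

# Occupant in a "fun" atmosphere seated next to a sleeping occupant:
print(output_volume(10, "fun", "quiet"))  # 11.0 (damped rise)
print(output_volume(10, "fun", "fun"))    # 12.0 (full rise)
print(output_volume(10, "quiet"))         # 0.0 (output stopped)
```

A "quiet" weight of 0 reproduces the output-stop behaviour described for a sleeping occupant.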
As an in-vehicle situation A1, consider a case where occupant A in the driver's seat and occupant D in a rear seat converse via an in-car call, occupant B in the front passenger seat watches the in-car television, and occupant C in a rear seat is asleep.
Occupant D is actively conversing, whereas occupant A is concentrating on driving and not actively conversing. Occupant D, who is active in the conversation, is estimated to be in a "fun" atmosphere, and occupant A, who is not, is estimated to be in a "bored" atmosphere.
Further, occupant B, who is watching the in-car television, is estimated to be in a "fun" atmosphere, and the sleeping occupant C is estimated to be in a "quiet" atmosphere.
In the in-vehicle situation A1, the control unit 11 lowers the output volume of the sound output device 3a corresponding to occupant A, who is estimated to be in a "bored" atmosphere. The control unit 11 also raises the output volume of the sound output device 3b corresponding to occupant B, estimated to be in a "fun" atmosphere, and the output volume of the sound output device 3d corresponding to occupant D, likewise estimated to be in a "fun" atmosphere. Further, the control unit 11 sets the output volume of the sound output device 3c corresponding to occupant C, estimated to be in a "quiet" atmosphere, to 0 (output stopped).
Since occupant D is seated next to occupant C, who is in a "quiet" atmosphere, the control unit 11 may correct the increase in the output volume of the sound output device 3d so that it is smaller than the increase in the output volume of the sound output device 3b corresponding to occupant B. As a result, sound can be output at volumes suitable for each of occupants A to D of the vehicle 200.
The control unit 11 may also determine the presence or absence of occupants in the vehicle based on an image signal capturing the vehicle interior, and start the processing of FIG. 5 only when it determines that there are multiple occupants in the vehicle. Sensor information, such as from weight sensors provided in the seats, may be used for this determination.
This makes it possible to reduce the power consumption of the sound output control device 2.
FIG. 7 is a diagram showing another arrangement example of the sound output control system 1, in which the sound output control system 1 is arranged in a vehicle 200. In the example shown in FIG. 7, the sound input devices 4a to 4d are provided corresponding to the seats in which occupants A to D are seated, and occupant D in a rear seat carries the portable terminal 5. The control of the sound output of an in-car call conducted between occupant A and occupant D via the sound output control device 2 and the portable terminal 5 is described below.
FIG. 8 is a flowchart showing call processing by the portable terminal 5, illustrating a series of steps in an in-car call using the portable terminal 5.
In response to a start operation by occupant D, the portable terminal 5 searches for the sound output control device 2 and checks whether the sound output control device 2 has been detected (step ST1b). If the sound output control device 2 is not detected (step ST1b; NO), the portable terminal 5 repeats the search. When the sound output control device 2 is detected (step ST1b; YES), the portable terminal 5 establishes a communication connection with the sound output control device 2 and transmits the position information of the portable terminal 5 (step ST2b). The position information of the portable terminal 5 is information for identifying the occupant, and may be information indicating the seat in which occupant D is seated.
Subsequently, the portable terminal 5 collects sound around the rear seat in which occupant D is seated, and transmits this input sound signal to the sound output control device 2 (step ST3b).
Thereafter, the portable terminal 5 starts an in-car call via the sound output control device 2 (step ST4b). In this in-car call, occupant A's speech is input to the sound input device 4 and output from the speaker of the portable terminal 5, while occupant D's speech is input to the microphone of the portable terminal 5 and output from the sound output device 3a. A control signal is transmitted from the sound output control device 2 to the portable terminal 5, and the output volume of the speaker of the portable terminal 5 is set according to this control signal.
The portable terminal 5 checks whether the in-car call has ended (step ST5b). If it has not ended (step ST5b; NO), the processing returns to step ST4b. If the call has ended (step ST5b; YES), the processing of FIG. 8 ends.
FIG. 9 is a flowchart showing the processing by which the sound output control device 2 controls the sound output of the portable terminal 5, illustrating a series of steps in an in-car call using the portable terminal 5.
The acquisition unit 15 of the sound output control device 2 searches for the portable terminal 5 via the communication I/F unit 14 and checks whether the portable terminal 5 has been detected (step ST1c). If the portable terminal 5 is not detected (step ST1c; NO), the acquisition unit 15 repeats the search.
When the portable terminal 5 is detected (step ST1c; YES), the communication I/F unit 14 establishes a communication connection with the portable terminal 5 and receives the position information of the portable terminal 5 (step ST2c). Subsequently, the communication I/F unit 14 receives the input sound signal from the portable terminal 5 (step ST3c). The acquisition unit 15 acquires the position information of the portable terminal 5 and the input sound signal received by the communication I/F unit 14, and outputs them to the analysis unit 16.
The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the analysis results of the analysis unit 16 (step ST4c).
When the in-car call is started, the acquisition unit 15 acquires the sound signals input to the sound input devices 4a to 4d in addition to the sound signal received from the portable terminal 5, and outputs them to the analysis unit 16. The analysis unit 16 acoustically analyzes the input sound signals from the portable terminal 5 and the sound input device 4d to obtain the conversation content, voice tone, or utterance frequency. Similarly, the analysis unit 16 acoustically analyzes the sound signals input to the sound input devices 4a to 4c. Based on these analysis results, the atmosphere estimation unit 17 estimates the in-vehicle atmospheres of occupants A to D.
The control unit 11 controls the call volume based on the in-vehicle atmospheres of occupants A to D estimated by the atmosphere estimation unit 17 (step ST5c).
For example, after determining an output volume according to occupant A's in-vehicle atmosphere, the control unit 11 corrects the output volume in consideration of the in-vehicle atmosphere of occupant B seated next to occupant A, and then controls the sound output device 3a to output sound at the corrected output volume.
After determining an output volume according to occupant D's in-vehicle atmosphere, the control unit 11 corrects the output volume in consideration of the in-vehicle atmosphere of occupant C seated next to occupant D, and then creates a control signal for outputting sound at the corrected output volume. The control signal is transmitted to the portable terminal 5 by the communication I/F unit 14.
The control unit 11 checks whether the in-car call has ended (step ST6c). If it has not ended (step ST6c; NO), the processing returns to step ST4c. If the call has ended (step ST6c; YES), the processing of FIG. 9 ends.
As described above, when the portable terminal 5 is a control target, it is not necessary to provide an in-vehicle speaker and an in-vehicle microphone for the occupant who uses the portable terminal 5. Therefore, when all occupants carry portable terminals 5 and the sound outputs of these terminals are controlled, in-vehicle speakers and in-vehicle microphones become unnecessary, and the cost required to introduce the sound output control system 1 can be reduced.
As described above, the sound output control device 2 according to Embodiment 1 includes the estimation unit 10, which estimates the in-vehicle atmosphere for each occupant of the vehicle 200, and the control unit 11, which controls the sound output for each occupant based on the in-vehicle atmosphere estimated for each occupant by the estimation unit 10. The control unit 11 controls the sound output of in-car calls conducted between occupants of the vehicle 200 and controls the output of media playback sound. With these configurations, the sound output is controlled for each occupant, so that sound can be output at a volume suitable for each occupant of the vehicle 200.
Embodiment 2.
FIG. 10 is a block diagram showing the configuration of a sound output control device 2A according to Embodiment 2 of the present invention. In FIG. 10, the same components as those in FIG. 4 are denoted by the same reference numerals, and their description is omitted.
The sound output control device 2A includes an estimation unit 10A and a control unit 11A as a configuration for controlling sound output. The estimation unit 10A includes an acquisition unit 15A, the analysis unit 16, the atmosphere estimation unit 17, and a noise estimation unit 18.
The acquisition unit 15A acquires, for each occupant, information used for estimating the in-vehicle atmosphere and information used for estimating in-vehicle noise. For example, as in Embodiment 1, the acquisition unit 15A acquires, as occupant information, the sound signals input to the in-vehicle sound input devices and the image information of the vehicle interior captured by the imaging device 8. The acquisition unit 15A also acquires the vehicle information acquired by the vehicle information acquisition unit 6 as information used for estimating in-vehicle noise. The vehicle information is information on the state of the vehicle and its driving operation, and includes, for example, window opening degree, engine speed, vehicle speed, accelerator opening degree, and vehicle position information.
The noise estimation unit 18 estimates the in-vehicle noise for each occupant based on at least one of the vehicle information and the sound signals input to the sound input devices. For example, the noise estimation unit 18 estimates a road noise level using the traveling state and traveling location of the vehicle identified from the vehicle information, and estimates a wind noise level using the opening degree of the window near each occupant. The noise estimation unit 18 takes the road noise level with the wind noise level added as the in-vehicle noise for each occupant.
Alternatively, the noise estimation unit 18 may take, as the in-vehicle noise level for each occupant, the signal level obtained by removing the sound levels of the speech and media playback sound acquired by the acquisition unit 15A from the sound signal input to the sound input device.
The control unit 11A controls the sound output of the sound output devices 3a to 3d for each occupant based on the in-vehicle atmosphere for each occupant estimated by the atmosphere estimation unit 17 and the in-vehicle noise for each occupant estimated by the noise estimation unit 18. For example, the control unit 11A may select the output volume of the sound output device for each occupant from table data associating sound output device volumes with in-vehicle atmospheres and in-vehicle noise.
The functions of the estimation unit 10A and the control unit 11A in the sound output control device 2A are realized by a processing circuit. That is, the sound output control device 2A includes a processing circuit for executing these functions. As shown in FIGS. 2A and 2B, the processing circuit may be dedicated hardware or a CPU that executes a program stored in a memory.
Next, the operation will be described.
FIG. 11 is a flowchart showing the operation of the estimation unit 10A, illustrating the details of the processing corresponding to step ST1 in FIG. 3.
First, the acquisition unit 15A acquires the occupant information used for estimating the in-vehicle atmosphere and the vehicle information used for estimating in-vehicle noise (step ST1d). For example, the acquisition unit 15A acquires the sound signals input to the sound input devices and the vehicle information either continuously while the sound output control system 1 is running, or periodically or continuously after a trigger event occurs. In this way, the sound around each occupant's seat is acquired as a sound signal, and the vehicle information is also acquired. As in Embodiment 1, such an event is an operation that initiates sound output, such as an operation for starting an in-car call or for starting media playback. In addition to or instead of the sound signals input to the sound input devices, the acquisition unit 15A may acquire image information of the vehicle interior captured by the imaging device 8. It is assumed that all occupants in the vehicle appear in this image information.
Next, the analysis unit 16 analyzes the occupant information acquired by the acquisition unit 15A to obtain information serving as criteria for estimating the in-vehicle atmosphere for each occupant (step ST2d). This processing is the same as step ST2a in FIG. 5.
The atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the analysis results of the analysis unit 16 (step ST3d). This processing is also the same as step ST3a in FIG. 5.
The noise estimation unit 18 estimates the in-vehicle noise for each occupant based on at least one of the vehicle information and the occupant information (step ST4d).
For example, road noise table data associating vehicle traveling states and traveling locations with road noise levels, and wind noise table data associating window opening degrees with wind noise levels, are stored in the storage unit 7 in advance.
The noise estimation unit 18 refers to the road noise table data read from the storage unit 7 and estimates the road noise level corresponding to the traveling state and traveling location of the vehicle identified from the vehicle information. The road noise level serves as the base level of the in-vehicle noise level.
Subsequently, the noise estimation unit 18 refers to the wind noise table data read from the storage unit 7 and estimates, for each occupant, the wind noise level corresponding to the opening degree of the window near that occupant.
The in-vehicle noise level for each occupant is, for example, the sound level obtained by adding the wind noise level to the road noise level.
Alternatively, the noise estimation unit 18 may estimate, as the in-vehicle noise for each occupant, the sound level obtained by removing the level of the occupant's speech or media playback sound from the sound level of the sound signal input to the sound input devices 4a to 4d provided for each seat in the vehicle.
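The table-based noise estimation of step ST4d can be sketched as two lookups and a sum; every table value below is an illustrative assumption, not a value from this disclosure:

```python
# Hypothetical sketch of per-occupant noise estimation (step ST4d):
# a road-noise base level looked up from the driving state and location,
# plus a wind-noise level looked up from the nearby window opening.
# All table values are assumed for illustration.

ROAD_NOISE_TABLE = {  # (traveling state, location) -> dB, assumed
    ("cruising", "highway"): 60,
    ("cruising", "city"): 50,
    ("stopped", "city"): 35,
}
WIND_NOISE_TABLE = {0: 0, 25: 5, 50: 10, 100: 20}  # % open -> dB, assumed

def in_vehicle_noise(state, location, window_open_pct):
    road = ROAD_NOISE_TABLE[(state, location)]  # base level (shared)
    wind = WIND_NOISE_TABLE[window_open_pct]    # per-occupant add-on
    return road + wind

print(in_vehicle_noise("cruising", "highway", 50))  # 70
print(in_vehicle_noise("stopped", "city", 0))       # 35
```

The road-noise term is common to all occupants, while the wind-noise term differs per occupant because it depends on the window nearest each seat.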
When the processing shown in FIG. 11 is completed, the control unit 11A determines an output volume for each occupant based on the in-vehicle atmosphere and in-vehicle noise estimated for each occupant by the estimation unit 10A, and controls the sound output devices so that sound is output at the determined volumes.
For example, as in Embodiment 1, the control unit 11A calculates the output volume of the sound output device for each occupant by multiplying the reference volume of each sound output device by the weight set for each in-vehicle atmosphere. Alternatively, the control unit 11A may select the output volume of the sound output device for each occupant from table data associating in-vehicle atmospheres with sound output device volumes.
Next, the control unit 11A corrects the output volume obtained as described above so that it is larger than the in-vehicle noise level. At this time, the control unit 11A controls the degree to which the output volume is raised in consideration of the in-vehicle atmosphere of each occupant and the positional relationship among the occupants.
For example, when raising the volume of the sound output device corresponding to an occupant y seated next to an occupant x, the increase is corrected to be smaller when occupant x is sleeping (a "quiet" atmosphere) than when occupant x is awake.
The control unit 11A may also determine the presence or absence of occupants in the vehicle based on an image signal capturing the vehicle interior, and start the processing of FIG. 11 only when it determines that there are multiple occupants in the vehicle. Sensor information, such as from weight sensors provided in the seats, may be used for this determination.
This makes it possible to reduce the power consumption of the sound output control device 2A.
As described above, in the sound output control device 2A according to Embodiment 2, the control unit 11A controls the sound output for each occupant based on the in-vehicle noise for each occupant in addition to the in-vehicle atmosphere for each occupant. With this configuration as well, the sound output is controlled for each occupant, so that sound can be output at a volume suitable for each occupant of the vehicle.
Embodiment 3.
FIG. 12 is a block diagram showing the configuration of a sound output control device 2B according to Embodiment 3 of the present invention. In FIG. 12, the same components as those in FIGS. 4 and 10 are denoted by the same reference numerals, and their description is omitted. The sound output control device 2B includes, as a configuration for controlling sound output, the estimation unit 10 or the estimation unit 10A, and a control unit 11B. The control unit 11B includes a determination unit 19, a history information acquisition unit 20, and a volume control unit 21.
The determination unit 19 determines whether an occupant has previously ridden in the vehicle based on an image signal capturing the vehicle interior. The occupant to be determined is identified from the result of acoustically analyzing the sound signal of each occupant.
The history information acquisition unit 20 reads from the storage unit 7 the sound output history information of any occupant determined by the determination unit 19 to have riding experience, and outputs it to the volume control unit 21; for an occupant without riding experience, it outputs default sound output information to the volume control unit 21 instead of history information.
The default information is information indicating the default values of the sound output devices, and is stored in the storage unit 7 in association with each sound output device.
The volume control unit 21 controls the sound output of the sound output device for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit 10 or the estimation unit 10A, and on the sound output history information or default information for each occupant acquired by the history information acquisition unit 20.
The functions of the estimation unit 10 or estimation unit 10A and the control unit 11B in the sound output control device 2B are realized by a processing circuit. That is, the sound output control device 2B includes a processing circuit for executing these functions. As shown in FIGS. 2A and 2B, the processing circuit may be dedicated hardware or a CPU that executes a program stored in a memory.
Next, the operation will be described.
FIG. 13 is a flowchart showing the operation of the control unit 11B, illustrating the details of the processing corresponding to step ST2 in FIG. 3. Here, a configuration in which the control unit 11B is connected to the estimation unit 10 is taken as an example of the sound output control device 2B.
First, the history information acquisition unit 20 acquires the estimation information of the in-vehicle atmosphere for each occupant estimated by the estimation unit 10 (step ST1e).
Next, the determination unit 19 identifies each occupant from the image signal capturing the vehicle interior and determines whether the identified occupant has riding experience (step ST2e).
For example, information indicating the face patterns of occupants with riding experience is stored in the storage unit 7.
The determination unit 19 compares the face patterns stored in the storage unit 7 with the occupant's face image obtained by image analysis of the image signal, and determines that the occupant has riding experience when the facial features extracted from the face image match a face pattern.
Alternatively, the determination unit 19 may compare the acoustic characteristics of the occupant's voice, obtained by acoustically analyzing the sound signal of the speech input to the in-vehicle sound input device, with the acoustic characteristics of the voices of occupants with riding experience, and determine that the occupant has riding experience when they match.
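The pattern-matching check of step ST2e can be sketched as below; the feature encoding, the tolerance-based matching rule, and all stored values are assumptions for illustration, since the disclosure does not specify them:

```python
# Simplified sketch of the riding-experience check (step ST2e): features
# extracted from the occupant's face image are compared against stored
# face patterns; a match means the occupant has ridden before.
# Feature vectors, identities, and the tolerance are assumed values.

STORED_FACE_PATTERNS = {
    "occupant_A": (0.12, 0.80, 0.33),
    "occupant_D": (0.55, 0.21, 0.90),
}

def has_riding_experience(features, tolerance=0.05):
    """Return the matching stored identity, or None for a first-time
    occupant (who would then be given default volume information)."""
    for identity, pattern in STORED_FACE_PATTERNS.items():
        if all(abs(a - b) <= tolerance for a, b in zip(features, pattern)):
            return identity
    return None

print(has_riding_experience((0.12, 0.80, 0.33)))  # occupant_A
print(has_riding_experience((0.99, 0.99, 0.99)))  # None
```

The same shape of comparison would apply to the voice-based alternative, with acoustic characteristics in place of face features.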
When the determination unit 19 determines that the occupant has riding experience (step ST2e; YES), the history information acquisition unit 20 acquires the sound output history information corresponding to that occupant from the storage unit 7 (step ST3e). As described in Embodiment 1, in the sound output history information, the output volume set for the sound output device is registered for each occupant and for each sound source, in association with the occupant's identification information. The history information acquisition unit 20 acquires the history information corresponding to the occupant's identification information from the storage unit 7.
When the determination unit 19 determines that the occupant has no boarding experience (step ST2e; NO), the history information acquisition unit 20 acquires, from the storage unit 7, the default information of the sound output device provided for that occupant's seat (step ST4e). The sound output history information or default information obtained in this way for each occupant is output to the volume control unit 21.
The volume control unit 21 determines the output volume of each occupant's sound output device based on the in-vehicle atmosphere for each occupant and the sound output history information or default value (step ST5e).
First, the volume control unit 21 identifies which occupant in the sound output history information or default information corresponds to each occupant whose atmosphere was estimated by the estimation unit 10. The estimation unit 10 provides the estimation information of the occupant for each seat.
The volume control unit 21 therefore matches each occupant in the estimation information with an occupant in the history information or default information by comparing the occupant's seat in the estimation information with the seat associated with the history information or default information.
The volume control unit 21 then takes into account the in-vehicle atmosphere of each occupant and the positional relationship among the occupants, and uses a value obtained by correcting the output volume indicated by the history information or default information as the controlled output volume.
For example, when the output volume indicated by the history information of occupant y, who is seated next to occupant x, is higher than the current output volume, the volume control unit 21 raises the volume to the level indicated by the history information if occupant x is awake. If occupant x is asleep, however, the volume control unit 21 sets the output volume of the sound output device corresponding to occupant y to a value lower than the one indicated by the history information.
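The neighbor-aware correction in the example above can be sketched as follows. The function name and the 0.5 reduction factor used when the neighbor is asleep are illustrative assumptions; the description only requires a value smaller than the history volume in that case.

```python
def volume_for_occupant_y(history_volume, current_volume, neighbor_asleep):
    """Correct occupant y's output volume based on neighbor x's state."""
    if neighbor_asleep:
        return history_volume * 0.5   # smaller than the remembered level
    if history_volume > current_volume:
        return history_volume         # raise to the remembered level
    return current_volume             # otherwise leave the volume as is

print(volume_for_occupant_y(8, 5, neighbor_asleep=False))  # 8
print(volume_for_occupant_y(8, 5, neighbor_asleep=True))   # 4.0
```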
The volume control unit 21 controls each sound output device to output sound at the per-occupant output volume determined in step ST5e (step ST6e). Using the sound output history information reduces the computational load of determining the output volume of the sound output device for each occupant.
The sound output history information is created automatically by the volume control unit 21 and stored in the storage unit 7. For example, when an occupant sets the volume manually, the volume control unit 21 creates history information on the assumption that this volume is optimal for the occupant, and stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere and the in-vehicle noise at that time.
When the occupant does not set the volume, the volume control unit 21 creates history information on the assumption that the volume previously set on the sound output device is optimal for the occupant, and likewise stores it in the storage unit 7 in association with the occupant's in-vehicle atmosphere and the in-vehicle noise at that time.
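The automatic creation of a history record as described above might look like the following sketch; the key structure and field names are illustrative assumptions, not the disclosed storage format.

```python
def record_history(storage, occupant_id, source, volume, atmosphere, noise_db):
    """Store the set volume per occupant and per sound source, together with
    the atmosphere and cabin noise at the time it was set."""
    storage.setdefault(occupant_id, {})[source] = {
        "volume": volume,          # treated as the occupant's optimal volume
        "atmosphere": atmosphere,  # estimated atmosphere at setting time
        "noise_db": noise_db,      # cabin noise at setting time
    }

storage = {}
record_history(storage, "B", "radio", 6, "lively", 52)
print(storage["B"]["radio"]["volume"])  # 6
```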
Alternatively, the control unit 11B may determine the presence or absence of occupants from an image signal of the vehicle interior, and start the processing of FIG. 13 only when it determines that there are multiple occupants in the vehicle. Sensor information, such as from weight sensors provided in the seats, may also be used for this determination.
This suppresses the power consumption of the sound output control device 2B.
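The gating condition described above can be sketched minimally; the boolean per-seat occupancy input (e.g. thresholded weight-sensor readings) is an assumption.

```python
def should_start(occupied_flags):
    """Begin the per-occupant control process of FIG. 13 only when at least
    two seats are occupied; occupied_flags is one boolean per seat."""
    return sum(occupied_flags) >= 2

print(should_start([True, False, True]))   # True: two occupants detected
print(should_start([True, False, False]))  # False: driver only
```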
As described above, in the sound output control device 2B according to the third embodiment, the control unit 11B controls the sound output for each occupant based on the sound output history information for each occupant in addition to the in-vehicle atmosphere for each occupant. Since the sound output is still controlled per occupant with this configuration, sound can be output at a volume suited to each occupant of the vehicle.
The control unit 11B may also control the sound output for each occupant based on the in-vehicle atmosphere for each occupant, the in-vehicle noise for each occupant, and the sound output history information.
For example, the sound output history information stored in the storage unit 7 is associated with the in-vehicle noise at the time the output volume was set, in addition to the estimation information of the in-vehicle atmosphere. The history information acquisition unit 20 acquires the history information for each occupant from the storage unit 7 based on the in-vehicle atmosphere estimation information and the in-vehicle noise estimated by the estimation unit 10A. The volume control unit 21 determines the output volume of each occupant's sound output device based on the per-occupant atmosphere estimation information, the estimated in-vehicle noise, and the sound output history information or default value. Since the sound output is still controlled per occupant with this configuration, sound can be output at a volume suited to each occupant of the vehicle.
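A history lookup keyed on both atmosphere and noise, as described above, could be sketched as follows. The matching rule (exact atmosphere match, then nearest recorded noise level) is an assumption for illustration.

```python
def history_volume(records, atmosphere, noise_db, default=5):
    """Pick the stored volume whose recorded conditions best match the
    current atmosphere and cabin noise; fall back to a default value."""
    candidates = [r for r in records if r["atmosphere"] == atmosphere]
    if not candidates:
        return default  # e.g. a first-time rider under these conditions
    best = min(candidates, key=lambda r: abs(r["noise_db"] - noise_db))
    return best["volume"]

records = [
    {"atmosphere": "lively", "noise_db": 50, "volume": 7},
    {"atmosphere": "lively", "noise_db": 65, "volume": 9},
    {"atmosphere": "quiet",  "noise_db": 45, "volume": 3},
]
print(history_volume(records, "lively", 60))  # 9 (nearest noise match)
print(history_volume(records, "tense", 60))   # 5 (default)
```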
Embodiment 4.
The fourth embodiment describes controlling the sound output of an in-car call conducted between one occupant and multiple occupants, and of an in-car call conducted between two occupants one-on-one. Since the configurations of the sound output control system and sound output control device according to the fourth embodiment are the same as those of the first embodiment, FIG. 1 and FIG. 4 are referred to below for those configurations.
FIG. 14 is a diagram showing an arrangement example of the sound output control system 1 according to the fourth embodiment. The vehicle 200A shown in FIG. 14 has three rows of seats. The sound input device 4 is provided for occupant A, the driver; an occupant in the second row carries the mobile terminal 5a, and an occupant in the third row carries the mobile terminal 5b. The mobile terminals 5a and 5b have the same configuration as the mobile terminal 5 shown in FIG. 1 and function in the same way.
The sound output device 3a is provided for occupant A, the driver; the sound output device 3b for occupant B in the passenger seat; the sound output device 3c for the seat of occupant C in the second row; and the sound output device 3d for the seat of occupant D in the second row. Furthermore, the sound output device 3e is provided for the seat of occupant E in the third row, and the sound output device 3f for the seat of occupant F in the third row.
FIG. 15 is a flowchart showing the sound output control process for an in-car call conducted between one occupant and multiple occupants.
Hereinafter, an in-car call between one occupant and multiple occupants is called a public-mode in-car call, and the control of sound output in a public-mode in-car call conducted between occupant A and occupants B through F is described.
First, the arithmetic processing unit 9 detects the sound input device of the parent device (step ST1f).
For example, the arithmetic processing unit 9 detects the sound input device or mobile terminal set in advance as the public-mode parent device, or detects as the parent device the sound input device or mobile terminal of the device on which a public-mode in-car call start operation was performed. Here, it is assumed that the sound input device 4 is the parent device's sound input device and the sound output device 3a is the parent device's sound output device.
The control unit 11 masks (blocks) calls that use sound input devices other than the parent device's (step ST2f). As a result, only calls between the parent device and its child devices remain valid, and the public-mode in-car call conducted between occupant A and occupants B through F starts.
In a public-mode in-car call, two-way conversation between the parent device and the child devices is possible.
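The masking in step ST2f can be sketched as follows; the device identifiers and the dictionary representation of the mask state are illustrative assumptions.

```python
def unmasked_inputs(all_inputs, parent, children):
    """Only the parent device and the call's child devices stay active;
    every other sound input is masked (blocked) for the call."""
    allowed = {parent} | set(children)
    return {device: device in allowed for device in all_inputs}

state = unmasked_inputs(["mic_4", "terminal_5a", "terminal_5b", "mic_4d"],
                        parent="mic_4", children=["terminal_5a", "terminal_5b"])
print(state)  # mic_4d is masked; the parent and both child devices stay active
```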
The acquisition unit 15 acquires the sound signals input to the parent device's sound input device 4 and to the microphones of the child-device mobile terminals 5a and 5b (step ST3f). The acquisition unit 15 thus acquires, as sound signals, the sound around the first-row seats input to the sound input device 4, the sound around the second-row seats input to the microphone of the mobile terminal 5a, and the sound around the third-row seats input to the microphone of the mobile terminal 5b. The analysis unit 16 acoustically analyzes the sound signals acquired by the acquisition unit 15 and obtains information serving as the basis for estimating the in-vehicle atmosphere for each occupant.
Next, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the estimation-basis information obtained by the analysis unit 16 (step ST4f).
For example, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant from each occupant's sound signal based on the content of the conversation, the tone of voice, and the utterance frequency. The atmosphere estimation unit 17 may also estimate the in-vehicle atmosphere for each occupant based on the occupant's facial expressions and gestures.
The control unit 11 determines an output volume for each occupant based on the per-occupant in-vehicle atmosphere estimated by the atmosphere estimation unit 17, and controls the parent device's sound output device 3a and the child devices' sound output devices so that sound is output at the determined volumes (step ST5f).
The child devices' sound output devices may be the speakers of the mobile terminals 5a and 5b, or may be the sound output devices 3b to 3f corresponding to occupants B through F.
In a public-mode in-car call, the output volume of the sound output device corresponding to an occupant who is active in the conversation is thus raised, and the output volume of the sound output device corresponding to an occupant who is not active in the conversation is lowered. The volume of the sound output device corresponding to a sleeping occupant is set to 0 (output stopped).
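The per-occupant volume decision described above (raise for active talkers, lower for passive ones, stop output for sleepers) can be sketched as follows; the step size, ceiling, and floor values are assumptions for illustration.

```python
def output_volume(base_volume, atmosphere):
    """Map an occupant's estimated atmosphere to an output volume."""
    if atmosphere == "sleeping":
        return 0                          # output stopped
    if atmosphere == "active":
        return min(base_volume + 2, 10)   # raise, capped at an assumed maximum
    if atmosphere == "passive":
        return max(base_volume - 2, 1)    # lower, but keep the call audible
    return base_volume                    # unknown atmosphere: leave unchanged

print(output_volume(5, "active"))    # 7
print(output_volume(5, "sleeping"))  # 0
```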
Thereafter, the control unit 11 checks whether the public-mode in-car call has ended (step ST6f). If it has not ended (step ST6f; NO), the process returns to step ST3f. If the call has ended (step ST6f; YES), the process of FIG. 15 ends. In this way, a public-mode in-car call can be conducted at a volume suited to each occupant.
Although the control of sound output in a public-mode in-car call has been described so far, the same control may be applied to the control of sound output when media playback sound is distributed from one device to multiple devices.
In this case, the control unit 11 of the sound output control device 2 mixes the media playback sound reproduced by the parent device's media player into the output sound, and causes the child devices' sound output devices to output the mixed sound at output volumes determined based on the in-vehicle atmosphere of each occupant.
FIG. 16 is a diagram showing another arrangement example of the sound output control system 1 according to the fourth embodiment. The vehicle 200B shown in FIG. 16 has three or more rows of seats. The sound input device 4 is provided for occupant A, the driver, and an occupant in the last row carries the mobile terminal 5c. The mobile terminal 5c has the same configuration as the mobile terminal 5 shown in FIG. 1 and functions in the same way.
The sound output device 3a is provided for occupant A, the driver; the sound output device 3b for occupant B in the passenger seat; the sound output device 3c for the seat of occupant C in the second row; and the sound output device 3d for the seat of occupant D in the second row. Furthermore, the sound output device 3i is provided for the seat of occupant I in the last row, and the sound output device 3j for the seat of occupant J.
FIG. 17 is a flowchart showing the sound output control process for an in-car call conducted between two occupants one-on-one. Hereinafter, a one-on-one in-car call is called a private-mode in-car call, and the control of sound output in a private-mode in-car call conducted between occupant A and occupant J is described.
First, the arithmetic processing unit 9 detects the sound input device of the parent device (step ST1g).
For example, the arithmetic processing unit 9 detects the sound input device or mobile terminal set in advance as the private-mode parent device, or detects as the parent device the sound input device or mobile terminal of the device on which a private-mode in-car call start operation was performed. Here, it is assumed that the sound input device 4 is the parent device's sound input device and the sound output device 3a is the parent device's sound output device.
Next, the arithmetic processing unit 9 selects a child device (step ST2g).
For example, the arithmetic processing unit 9 detects the sound input device or mobile terminal set in advance as the private-mode child device, or detects as the child device the sound input device or mobile terminal of the device on which a private-mode in-car call start operation was performed. Here, it is assumed that the mobile terminal 5c is the child device.
The acquisition unit 15 acquires the sound signals input to the parent device's sound input device 4 and to the microphone of the child-device mobile terminal 5c (step ST3g). The acquisition unit 15 thus acquires, as sound signals, the sound around the first-row seats input to the sound input device 4 and the sound around the last-row seats input to the microphone of the mobile terminal 5c. The analysis unit 16 acoustically analyzes the sound signals acquired by the acquisition unit 15 and obtains information serving as the basis for estimating the in-vehicle atmosphere for each occupant.
Next, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant based on the estimation-basis information obtained by the analysis unit 16 (step ST4g).
For example, the atmosphere estimation unit 17 estimates the in-vehicle atmosphere for each occupant from each occupant's sound signal based on the content of the conversation, the tone of voice, and the utterance frequency. The atmosphere estimation unit 17 may also estimate the in-vehicle atmosphere for each occupant based on the occupant's facial expressions and gestures.
Subsequently, the control unit 11 determines an output volume for each occupant based on the per-occupant in-vehicle atmosphere estimated by the atmosphere estimation unit 17, and controls the sound output of the parent device's sound output device 3a and of the child device's sound output device so that sound is output at the determined volumes (step ST5g).
The child device's sound output device may be the speaker of the mobile terminal 5c, or may be the sound output device 3j corresponding to occupant J.
In a private-mode in-car call, the output volume of the sound output device corresponding to an occupant who is active in the conversation is thus raised, and the output volume of the sound output device corresponding to an occupant who is not active in the conversation is lowered.
Thereafter, the control unit 11 checks whether the private-mode in-car call has ended (step ST6g). If it has not ended (step ST6g; NO), the process returns to step ST3g. If the call has ended (step ST6g; YES), the process of FIG. 17 ends. In this way, a private-mode in-car call can be conducted at a volume suited to each occupant.
Although the control of sound output in a private-mode in-car call has been described so far, the same control may be applied to the control of sound output when media playback sound is distributed one-to-one.
In this case, the control unit 11 of the sound output control device 2 mixes the media playback sound reproduced by the parent device's media player into the output sound, and causes the child device's sound output device to output the mixed sound at an output volume determined based on the in-vehicle atmosphere of each occupant.
Although the case where the sound output control device 2 controls the sound output in public-mode and private-mode in-car calls has been described, the sound output control device 2A or the sound output control device 2B may perform the same control.
As described above, in the sound output control system 1 according to the fourth embodiment, the control unit 11 controls the sound output in a public-mode in-car call for each occupant. A public-mode in-car call can thereby be conducted at a volume suited to each occupant.
Also, in the sound output control system 1 according to the fourth embodiment, the control unit 11 controls the sound output in a private-mode in-car call for each occupant. A private-mode in-car call can thereby be conducted at a volume suited to each occupant.
Although the first through fourth embodiments show configurations in which the sound output control system is mounted on the vehicle, the present invention is not limited to this.
That is, as long as it can control the sound output for each occupant, the sound output control device may be provided in a mobile terminal brought into the vehicle, or may be mounted on a server capable of communicating with the vehicle.
Within the scope of the invention, the embodiments may be freely combined, any component of any embodiment may be modified, and any component may be omitted in any embodiment.
Since the sound output control device according to the present invention can output sound at a volume suited to each vehicle occupant, it is well suited to controlling sound output in in-car calls between vehicle occupants or in in-vehicle media playback.
1: sound output control system; 2, 2A, 2B: sound output control device; 3a-3j: sound output device; 4, 4a-4d: sound input device; 5, 5a-5c: mobile terminal; 6: vehicle information acquisition unit; 7: storage unit; 8: imaging device; 9: arithmetic processing unit; 10, 10A: estimation unit; 11, 11A, 11B: control unit; 12a-12c: signal conversion unit; 13a, 13b: signal amplification unit; 14: communication I/F unit; 15, 15A: acquisition unit; 16: analysis unit; 17: atmosphere estimation unit; 18: noise estimation unit; 19: determination unit; 20: history information acquisition unit; 21: volume control unit; 100, 102: signal I/F; 101: in-vehicle speaker; 103: in-vehicle microphone; 104: communication I/F; 106: storage device; 107: processing circuit; 108: CPU; 109: memory; 200, 200A, 200B: vehicle.

Claims (11)

  1.  A sound output control device comprising:
      an estimation unit configured to estimate an in-vehicle atmosphere for each of a plurality of occupants of a vehicle; and
      a control unit configured to control sound output for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit.
  2.  The sound output control device according to claim 1, wherein the control unit controls sound output for each occupant based on at least one of in-vehicle noise for each occupant and sound output history information for each occupant, in addition to the in-vehicle atmosphere for each occupant.
  3.  The sound output control device according to claim 1, wherein the control unit controls sound output of an in-car call conducted between occupants of the vehicle.
  4.  The sound output control device according to claim 1, wherein the control unit controls output of media playback sound.
  5.  A sound output control system comprising:
      a plurality of sound output devices provided in a vehicle;
      an estimation unit configured to estimate an in-vehicle atmosphere for each of a plurality of occupants of the vehicle; and
      a control unit configured to control the sound output of the sound output devices for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit.
  6.  The sound output control system according to claim 5, wherein the control unit controls the sound output of the sound output devices for each occupant based on at least one of in-vehicle noise for each occupant and sound output history information for each occupant, in addition to the in-vehicle atmosphere for each occupant.
  7.  The sound output control system according to claim 5, wherein the sound output device is a vehicle-mounted speaker.
  8.  The sound output control system according to claim 5, wherein the sound output device is a speaker provided in a mobile terminal.
  9.  The sound output control system according to claim 5, wherein the control unit controls, for each occupant, the sound output of the sound output devices in an in-car call conducted between one occupant and multiple occupants.
  10.  The sound output control system according to claim 5, wherein the control unit controls, for each occupant, the sound output of the sound output devices in an in-car call conducted between two occupants one-on-one.
  11.  A sound output control method comprising:
      a step in which an estimation unit estimates an in-vehicle atmosphere for each of a plurality of occupants of a vehicle; and
      a step in which a control unit controls sound output for each occupant based on the in-vehicle atmosphere for each occupant estimated by the estimation unit.
PCT/JP2017/011171 2017-03-21 2017-03-21 Sound output control device, sound output control system, and sound output control method WO2018173112A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2017/011171 WO2018173112A1 (en) 2017-03-21 2017-03-21 Sound output control device, sound output control system, and sound output control method


Publications (1)

Publication Number Publication Date
WO2018173112A1 true WO2018173112A1 (en) 2018-09-27

Family

ID=63585972


Country Status (1)

Country Link
WO (1) WO2018173112A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021165004A1 (en) * 2020-02-19 2021-08-26 Bayerische Motoren Werke Aktiengesellschaft Method and control unit for operating a noise suppression unit of a vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0510103U (en) * 1991-07-23 1993-02-09 有限会社天水リサーチ Vehicle communication device
JPH0983277A (en) * 1995-09-18 1997-03-28 Fujitsu Ten Ltd Sound volume adjustment device
JP2006293145A (en) * 2005-04-13 2006-10-26 Nissan Motor Co Ltd Unit and method for active vibration control
JP2008021337A (en) * 2006-07-10 2008-01-31 Alpine Electronics Inc On-vehicle acoustic system
JP2013207580A (en) * 2012-03-28 2013-10-07 Jvc Kenwood Corp Acoustic parameter setting device, server, acoustic parameter setting method and program
JP2014167438A (en) * 2013-02-28 2014-09-11 Denso Corp Information notification device


Similar Documents

Publication Publication Date Title
CN110070868B (en) Voice interaction method and device for vehicle-mounted system, automobile and machine readable medium
CN101064975B (en) Vehicle communication system
US20140112496A1 (en) Microphone placement for noise cancellation in vehicles
CN109119060B (en) Active noise reduction method and system applied to automobile
US9743213B2 (en) Enhanced auditory experience in shared acoustic space
JP4779748B2 (en) Voice input / output device for vehicle and program for voice input / output device
US20160127827A1 (en) Systems and methods for selecting audio filtering schemes
JP6466385B2 (en) Service providing apparatus, service providing method, and service providing program
US10805730B2 (en) Sound input/output device for vehicle
JP7049803B2 (en) In-vehicle device and audio output method
US11089404B2 (en) Sound processing apparatus and sound processing method
CN110696756A (en) Vehicle volume control method and device, automobile and storage medium
US20240096343A1 (en) Voice quality enhancement method and related device
CN110402584A (en) In-vehicle call control apparatus, in-vehicle communication system and in-vehicle call control method
US10020785B2 (en) Automatic vehicle occupant audio control
CN115482830A (en) Speech enhancement method and related equipment
EP3618465B1 (en) Vehicle communication system and method of operating vehicle communication systems
WO2018173112A1 (en) Sound output control device, sound output control system, and sound output control method
CN114194128A (en) Vehicle volume control method, vehicle, and storage medium
CN115831141B (en) Noise reduction method and device for vehicle-mounted voice, vehicle and storage medium
JP6785889B2 (en) Service provider
JP2005354223A (en) Sound source information processing apparatus, sound source information processing method, and sound source information processing program
WO2020016927A1 (en) Sound field control apparatus and sound field control method
JP6332072B2 (en) Dialogue device
JP6995254B2 (en) Sound field control device and sound field control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 17901960; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 17901960; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)