WO2020208667A1 - Audio output control device and audio output control method - Google Patents

Audio output control device and audio output control method

Info

Publication number
WO2020208667A1
Authority
WO
WIPO (PCT)
Prior art keywords
time
control
audio output
face
control device
Prior art date
Application number
PCT/JP2019/015251
Other languages
French (fr)
Japanese (ja)
Inventor
政博 秋田
Original Assignee
三菱電機株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 三菱電機株式会社
Priority to JP2021513033A (JP6887588B2)
Priority to PCT/JP2019/015251 (WO2020208667A1)
Publication of WO2020208667A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/20 Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • Each function of the acquisition unit 11 and the like is not limited to being realized entirely by either hardware or software; a configuration may be adopted in which a part of the acquisition unit 11 and the like is realized by dedicated hardware and another part is realized by software or the like. For example, the function of the acquisition unit 11 can be realized by the processing circuit 81 as dedicated hardware, an interface, a receiver, or the like, while the other functions can be realized by the processing circuit 81 as the processor 82 reading and executing a program stored in the memory 83. In this way, the processing circuit 81 can realize each of the above functions by hardware, software, or a combination thereof.
  • The audio output control device 1 described above can also be applied to an audio output control system constructed by appropriately combining a vehicle device such as a PND (Portable Navigation Device), a navigation device, or a DMS device, a communication terminal including a mobile terminal such as a mobile phone, a smartphone, or a tablet, the functions of applications installed on at least one of the vehicle device and the communication terminal, and a server. In that case, the functions or components of the audio output control device 1 described above may be distributed among the devices constructing the system, or may be concentrated in any one of the devices.
  • FIG. 7 is a block diagram showing the configuration of the server 91 according to this modification. The server 91 of FIG. 7 includes a communication unit 91a and a control unit 91b, and can perform wireless communication with a vehicle device 93, such as a navigation device or a DMS device, of a vehicle 92. The communication unit 91a, which serves as the acquisition unit, receives the occupant's face position acquired by the vehicle device 93 by performing wireless communication with the vehicle device 93. The control unit 91b has the same function as the control unit 12 of FIG. 1 in that a processor (not shown) of the server 91 executes a program stored in a memory (not shown) of the server 91. That is, the control unit 91b generates a control signal for performing the real-time control when it is determined that the position-fixed time, which is the time for which the received face position has remained fixed, is longer than the predetermined time, and for performing the intermittent control when it is determined that the position-fixed time is equal to or shorter than the predetermined time. The communication unit 91a then transmits the control signal generated by the control unit 91b to the vehicle device 93. With the server 91 configured in this way, the same effect as that of the audio output control device 1 described in the first embodiment can be obtained.
  • FIG. 8 is a block diagram showing the configuration of the communication terminal 96 according to this modification. The communication terminal 96 of FIG. 8 includes a communication unit 96a similar to the communication unit 91a and a control unit 96b similar to the control unit 91b, and can perform wireless communication with a vehicle device 98 of a vehicle 97. A mobile terminal carried by the driver of the vehicle 97, such as a mobile phone, a smartphone, or a tablet, is applied as the communication terminal 96, for example. With the communication terminal 96 configured in this way, the same effect as that of the audio output control device 1 described in the first embodiment can be obtained.
  • Each embodiment and each modification can be freely combined, and each embodiment and each modification can be modified or omitted as appropriate.
  • Reference signs: 1 audio output control device, 11 acquisition unit, 12 control unit, 41 vehicle interior, 46 driver, 51 audio output device.

Abstract

The objective of the invention is to provide a technique for enabling stabilization of stereo reproduction. An audio output control device according to the invention is provided with an acquisition unit and a control unit. The acquisition unit acquires the position of the face of an occupant. The control unit performs real-time control when it is determined that a position-fixed time, which is the time for which the acquired face position has remained fixed, is longer than a predetermined time, and performs intermittent control, in which a plurality of audio output devices are controlled intermittently rather than in real time on the basis of the face position acquired by the acquisition unit, when it is determined that the position-fixed time is equal to or shorter than the predetermined time.

Description

Audio output control device and audio output control method
The present invention relates to an audio output control device that controls a plurality of audio output devices mounted on a vehicle, and to an audio output control method.
In recent years, a technique has been proposed in which the position of a user's face is detected from an image of the vehicle interior and a plurality of in-vehicle speakers are controlled in real time based on the position of the face (for example, Patent Document 1). With such a configuration, appropriate stereo reproduction can be provided to the user even if the position of the user's face changes.
Patent Document 1: Japanese Unexamined Patent Publication No. 2011-244431
However, with the above technique, when the user's position moves continuously, for example when the user is shaking his or her head to a rhythm, the stereo reproduction, and in turn the sound field and the like, change constantly. As a result, there is a problem that occupants of the vehicle other than the user feel a sense of strangeness and, in turn, discomfort.
The present invention has been made in view of the above problem, and an object of the present invention is to provide a technique capable of stabilizing stereo reproduction.
An audio output control device according to the present invention is an audio output control device that performs real-time control in which a plurality of audio output devices mounted on a vehicle are controlled in real time based on the position of the face of an occupant of the vehicle. The device includes an acquisition unit that acquires the position of the occupant's face, and a control unit that performs the real-time control when it is determined that a position-fixed time, which is the time for which the face position acquired by the acquisition unit has remained fixed, is longer than a predetermined time, and that performs intermittent control, in which the plurality of audio output devices are controlled more intermittently than in the real-time control based on the face position acquired by the acquisition unit, when it is determined that the position-fixed time is equal to or shorter than the predetermined time.
According to the present invention, the real-time control is performed when it is determined that the position-fixed time is longer than the predetermined time, and the intermittent control is performed when it is determined that the position-fixed time is equal to or shorter than the predetermined time, so that stereo reproduction can be stabilized.
The objects, features, aspects, and advantages of the present invention will become more apparent from the following detailed description and the accompanying drawings.
FIG. 1 is a block diagram showing the configuration of the audio output control device according to Embodiment 1.
FIG. 2 is a schematic diagram showing the vehicle device according to Embodiment 2 and its surroundings.
FIG. 3 is a block diagram showing the configuration of the vehicle device according to Embodiment 2.
FIG. 4 is a flowchart showing the operation of the vehicle device according to Embodiment 2.
FIG. 5 is a block diagram showing a hardware configuration of the audio output control device according to another modification.
FIG. 6 is a block diagram showing a hardware configuration of the audio output control device according to another modification.
FIG. 7 is a block diagram showing the configuration of the server according to another modification.
FIG. 8 is a block diagram showing the configuration of the communication terminal according to another modification.
<Embodiment 1>
FIG. 1 is a block diagram showing the configuration of the audio output control device 1 according to the first embodiment of the present invention. In the following description, the vehicle in which the audio output control device 1 is mounted and which is the vehicle of interest is referred to as the "own vehicle".
The audio output control device 1 of FIG. 1 is connected, wirelessly or by wire, to a plurality of audio output devices 51 mounted in the own vehicle so as to be able to communicate with them. The plurality of audio output devices 51 are, for example, a plurality of speakers provided in the interior of the own vehicle.
The audio output control device 1 includes an acquisition unit 11 and a control unit 12.
The acquisition unit 11 acquires the position of the face of an occupant of the own vehicle. In the following description, the occupant of the own vehicle is assumed to be the driver of the own vehicle. The acquisition unit 11 is, for example, an image analysis device that obtains the face position from an image of the driver captured by a camera, an interface to such a device, or the like.
The control unit 12 selectively performs real-time control and intermittent control of the plurality of audio output devices 51. In the real-time control, the control unit 12 controls the plurality of audio output devices 51 in real time based on the face position acquired by the acquisition unit 11. In the intermittent control, the control unit 12 controls the plurality of audio output devices 51 more intermittently than in the real-time control, based on the face position acquired by the acquisition unit 11; that is, in the intermittent control, the interval at which the plurality of audio output devices 51 are controlled is longer than in the real-time control. Through the real-time control and the intermittent control, parameters relating to the audio output timing and the volume of each audio output device 51 are adjusted. As a result, stereo reproduction is performed in which, for example, a sound field or a sound image is formed with respect to the position of the face.
In the first embodiment, the real-time control and the intermittent control are performed selectively on the assumption that the plurality of audio output devices 51 are at their default positions. However, as in the second embodiment described later, the positions of the plurality of audio output devices 51 are not limited to the default positions.
Here, the control unit 12 determines that the face position is fixed when the change in the face position acquired by the acquisition unit 11 per unit time is equal to or less than a predetermined value. The unit time is, for example, 0.1 seconds, and the predetermined value is, for example, 1 cm in actual distance.
The control unit 12 also obtains, as the position-fixed time, the time for which the determination that the position is fixed has continued, and determines whether or not the position-fixed time is longer than a predetermined time. The predetermined time is, for example, 3 seconds.
The control unit 12 performs the real-time control when it is determined that the position-fixed time is longer than the predetermined time, and performs the intermittent control when it is determined that the position-fixed time is equal to or shorter than the predetermined time.
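As a concrete illustration of the decision rule above, the following is a minimal Python sketch of the control unit 12, assuming face positions sampled every 0.1 seconds as 2D coordinates in metres and using the example values from the text (1 cm fixation threshold, 3-second comparison time). The class name, the coordinate representation, and the sampling scheme are illustrative assumptions, not details taken from the patent.

```python
import math

class ControlUnitSketch:
    """Illustrative sketch of control unit 12: selects real-time or intermittent control."""

    UNIT_TIME = 0.1           # seconds between face-position samples (example value)
    FIXED_THRESHOLD = 0.01    # 1 cm of movement per unit time, in metres (example value)
    PREDETERMINED_TIME = 3.0  # seconds the face must stay fixed (example value)

    def __init__(self):
        self.prev_position = None
        self.position_fixed_time = 0.0

    def select_mode(self, face_position):
        """face_position: (x, y) in metres. Returns 'real-time' or 'intermittent'."""
        if self.prev_position is not None:
            moved = math.dist(face_position, self.prev_position)
            # The face counts as fixed while it moves no more than the threshold
            # per unit time; any larger movement resets the position-fixed time.
            if moved <= self.FIXED_THRESHOLD:
                self.position_fixed_time += self.UNIT_TIME
            else:
                self.position_fixed_time = 0.0
        self.prev_position = face_position
        if self.position_fixed_time > self.PREDETERMINED_TIME:
            return "real-time"    # face has stayed put: track it every sample
        return "intermittent"     # face is moving: update parameters at a longer interval
```

In the intermittent mode, a caller would push parameter updates to the audio output devices 51 at a longer interval than the 0.1-second sampling period, which is what distinguishes the two modes in the text.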
<Summary of Embodiment 1>
According to the audio output control device 1 of the first embodiment described above, the real-time control is performed when it is determined that the position-fixed time is longer than the predetermined time, and the intermittent control is performed when it is determined that the position-fixed time is equal to or shorter than the predetermined time. With such a configuration, intermittent control can be performed as appropriate instead of always performing real-time control, so that stereo reproduction can be stabilized. As a result, occupants of the own vehicle other than the driver can be kept from feeling a sense of strangeness and, in turn, discomfort. A reduction in the processing load of the control unit 12 can also be expected.
<Embodiment 2>
FIG. 2 is a schematic diagram showing the vehicle device 1a according to the second embodiment of the present invention and its surroundings. In the following, among the components of the second embodiment, those that are the same as or similar to the components described above are given the same or similar reference signs, and the description focuses mainly on the components that differ.
The camera 53 connected to the vehicle device 1a captures an image of the interior 41 of the own vehicle. The camera 53 is arranged in the own vehicle so that the captured image includes the driver 46 of the own vehicle and the plurality of audio output devices 51.
The vehicle device 1a not only has the functions of the audio output control device 1 described in the first embodiment, but also has a DMS (Driver Monitoring System) function that monitors the state of consciousness and the state of health of the occupant and assists driving. The vehicle device 1a includes an image analysis unit 11a, a fixation determination unit 12a, a change determination unit 12b, and a parameter change unit 12c. The image analysis unit 11a falls under the concept of the acquisition unit 11 of FIG. 1, and the fixation determination unit 12a, the change determination unit 12b, and the parameter change unit 12c fall under the concept of the control unit 12 of FIG. 1.
The image analysis unit 11a acquires the position of the face of the driver 46 and the positions of the plurality of audio output devices 51 by analyzing the image captured by the camera 53. The acquired face position is used not only for the control of the plurality of audio output devices 51 described below, but also for the DMS function of the vehicle device 1a and the like.
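The patent does not specify how the face position is extracted from the camera image. As one hedged possibility, the sketch below uses the Haar-cascade face detector bundled with OpenCV to turn a captured frame into the pixel coordinates of a face centre; the function name, the choice of detector, and the largest-box heuristic for picking the driver are assumptions for illustration only.

```python
import cv2  # OpenCV, used here only as one possible face-detection backend

# Frontal-face Haar cascade shipped with OpenCV; any detector that returns a
# face bounding box could play the role of the image analysis unit 11a here.
_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_position_from_frame(frame):
    """Return the (x, y) centre of the largest detected face in pixels, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = _face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Assume the largest bounding box belongs to the occupant closest to the camera.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return (x + w / 2.0, y + h / 2.0)
```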
FIG. 3 is a block diagram showing the configuration of the vehicle device 1a according to the second embodiment.
The fixation determination unit 12a determines whether or not the position-fixed time is longer than the predetermined time, in the same way as the control unit 12 described in the first embodiment. The change determination unit 12b determines whether or not the position-fixed time is equal to or shorter than the predetermined time. Since the result of the determination by the fixation determination unit 12a and the result of the determination by the change determination unit 12b are expected to be exact opposites of each other, only one of the two determinations may be performed. In the following, for simplicity, it is assumed that the fixation determination unit 12a performs the determination, that is, determines whether or not the position-fixed time is longer than the predetermined time.
In addition to the above determination, the fixation determination unit 12a determines whether or not the sum of the processing load of the real-time control described in the first embodiment and the processing load of predetermined control (hereinafter referred to as the "load sum") is smaller than a predetermined processing load. Here, the predetermined control is, for example, the control of the DMS function, and the predetermined processing load is, for example, the maximum processing load of the vehicle device 1a. This determination may be performed by the change determination unit 12b instead of the fixation determination unit 12a.
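Stated as code, the load-sum check is only a comparison, but writing it out makes the direction of the test explicit: the real-time path is kept whenever the combined load still fits under the limit. The normalised load units are an assumption; the text only names the quantities being compared.

```python
def load_sum_ok(realtime_load, predetermined_load, max_load):
    """True when the load sum (real-time control plus the predetermined control,
    e.g. the DMS function) is smaller than the predetermined processing load,
    e.g. the maximum processing load of the vehicle device 1a."""
    return realtime_load + predetermined_load < max_load

# Example: load_sum_ok(0.4, 0.3, 1.0) is True, so real-time control is performed
# regardless of the position-fixed time.
```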
The change determination unit 12b determines whether or not the face position acquired by the image analysis unit 11a fluctuates periodically. The period in question is set to a time shorter than the predetermined time with which the position-fixed time is compared.
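The text does not say how the periodicity of the face position is detected. One possible heuristic, sketched below under that assumption, looks for a dominant frequency in the recent history of one face coordinate and accepts it only if the implied period is shorter than the predetermined time used in the position-fixed comparison; the power-ratio threshold is a tunable, illustrative value.

```python
import numpy as np

def is_periodic(coord_history, sample_period, max_period, min_power_ratio=0.4):
    """coord_history: recent values of one face coordinate, sampled every
    sample_period seconds. Returns (True, period_in_seconds) when a single
    frequency dominates and its period is shorter than max_period."""
    x = np.asarray(coord_history, dtype=float)
    if len(x) < 8:
        return False, None
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=sample_period)
    power[0] = 0.0                     # ignore the DC component (the mean)
    total = power.sum()
    if total == 0.0:
        return False, None             # the face is not moving at all
    k = int(np.argmax(power))
    period = 1.0 / freqs[k]
    return (power[k] / total >= min_power_ratio and period < max_period), period
```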
Similarly to the control unit 12 described in the first embodiment, the parameter change unit 12c performs the real-time control when it is determined that the position-fixed time is longer than the predetermined time, and performs the intermittent control when it is determined that the position-fixed time is equal to or shorter than the predetermined time. In doing so, the parameter change unit 12c uses the positions of the plurality of audio output devices 51 acquired by the image analysis unit 11a for the real-time control and the intermittent control.
Specifically, the parameter change unit 12c obtains the distance between the face and each audio output device 51 based on the face position acquired by the image analysis unit 11a and the position of each audio output device 51, and changes the parameters of each audio output device 51 based on that distance. For example, the longer the distance, the earlier the audio output timing of the audio output device 51 is made, or the higher its volume is made.
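As an illustration of how such distance-dependent parameters might look, the sketch below gives each speaker a time advance that aligns arrival times at the face and a gain that compensates the level drop over distance. The 1/r gain model, the nearest-speaker normalisation, and the coordinate convention are assumptions; the text only states that a longer distance leads to earlier output timing or higher volume.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def speaker_parameters(face_pos, speaker_positions):
    """face_pos and each speaker position are (x, y) coordinates in metres.
    Returns one dict per speaker with an output-timing advance and a gain factor."""
    distances = [math.dist(face_pos, sp) for sp in speaker_positions]
    nearest = min(distances)
    params = []
    for d in distances:
        params.append({
            # A farther speaker starts output earlier so that all wavefronts
            # arrive at the face at the same time.
            "advance_s": (d - nearest) / SPEED_OF_SOUND,
            # A farther speaker is driven louder; 1/r compensation relative
            # to the nearest speaker is one simple choice.
            "gain": d / nearest,
        })
    return params

# Example: face slightly left of centre between two front speakers.
print(speaker_parameters((-0.2, 0.0), [(-0.7, 1.0), (0.7, 1.0)]))
```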
When the fixation determination unit 12a determines that the load sum is smaller than the predetermined processing load, the parameter change unit 12c performs the real-time control regardless of the result of the determination as to whether or not the position-fixed time is longer than the predetermined time.
When the change in the parameters used for the real-time control is larger than a predetermined threshold, the parameter change unit 12c performs the intermittent control regardless of the result of the determination as to whether or not the position-fixed time is longer than the predetermined time and regardless of the result of the determination as to whether or not the load sum is smaller than the predetermined processing load.
Furthermore, when the change determination unit 12b determines that the face position fluctuates periodically, the parameter change unit 12c uses the center position of the fluctuation of the face for the intermittent control. That is, the parameter change unit 12c performs the intermittent control with respect to the center position of the fluctuation of the face. The center position of the fluctuation of the face corresponds to the center of the oscillation when the periodic fluctuation of the face is regarded as simple harmonic motion.
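Assuming the oscillation period found by the change determination unit and a buffer of recent face positions, the centre of the fluctuation can be approximated by averaging over whole periods, so that the left and right excursions cancel out. The buffering scheme below is an illustrative assumption.

```python
def fluctuation_center(positions, sample_period, period_s):
    """positions: recent (x, y) face positions sampled every sample_period seconds.
    Averages over as many whole oscillation periods as the buffer holds."""
    samples_per_period = max(1, round(period_s / sample_period))
    whole_periods = len(positions) // samples_per_period
    if whole_periods == 0:
        window = positions                         # too little data: use everything
    else:
        window = positions[-whole_periods * samples_per_period:]
    n = len(window)
    return (sum(p[0] for p in window) / n, sum(p[1] for p in window) / n)

# Example: a face swaying between x = -0.1 and x = +0.1 has its centre near x = 0.
print(fluctuation_center([(-0.1, 0.5), (0.0, 0.5), (0.1, 0.5), (0.0, 0.5)] * 3,
                         sample_period=0.1, period_s=0.4))
```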
<Operation>
FIG. 4 is a flowchart showing the operation of the vehicle device 1a according to the second embodiment. This operation is performed, for example, each time the camera 53 provides its periodic input to the vehicle device 1a.
First, in step S1, the image analysis unit 11a acquires the position of the face of the driver 46 by analyzing the image captured by the camera 53.
In step S2, the fixation determination unit 12a determines whether or not the face position in the previously acquired image and the face position in the image acquired this time are exactly the same, that is, whether or not the face position is completely fixed. If they are determined to be exactly the same, the operation of FIG. 4 ends; if they are determined not to be exactly the same, the process proceeds to step S3.
In step S3, the fixation determination unit 12a determines whether or not the load sum is smaller than the predetermined processing load. If the load sum is determined to be smaller than the predetermined processing load, the process proceeds to step S5; if the load sum is determined to be equal to or greater than the predetermined processing load, the process proceeds to step S4.
In step S4, the fixation determination unit 12a determines whether or not the position-fixed time is longer than the predetermined time. If the position-fixed time is determined to be longer than the predetermined time, the process proceeds to step S5; if it is determined to be equal to or shorter than the predetermined time, the process proceeds to step S7.
In step S5, the parameter change unit 12c changes the parameters used for the real-time control based on the face position. The parameter change is reflected in the plurality of audio output devices 51 only in step S9; at the time of step S5, the change is not yet reflected in the plurality of audio output devices 51.
In step S6, the parameter change unit 12c determines whether or not the change in the parameters used for the real-time control is larger than the predetermined threshold. If the parameter change is determined to be larger than the predetermined threshold, the process proceeds to step S7; if it is determined to be equal to or smaller than the predetermined threshold, the process proceeds to step S9.
In step S7, the change determination unit 12b determines whether or not the face position fluctuates periodically. If the face position is determined to fluctuate periodically, the process proceeds to step S8; if it is determined not to fluctuate periodically, the operation of FIG. 4 ends.
In step S8, the parameter change unit 12c changes the parameters used for the intermittent control based on the center position of the fluctuation of the face. The parameter change in step S8 is performed more intermittently than the parameter change evaluated in step S6. The parameter change is reflected in the plurality of audio output devices 51 only in step S9; at the time of step S8, the change is not yet reflected in the plurality of audio output devices 51. The process then proceeds to step S9.
In step S9, the parameter change unit 12c outputs the parameters to the plurality of audio output devices 51. As a result, the intermittent control is performed when step S9 is reached via step S8, and the real-time control is performed when step S9 is reached without passing through step S8. The operation of FIG. 4 then ends.
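The sketch below strings the branches of FIG. 4 together in a single per-frame routine. The helper callables (parameter computation, periodicity test, centre averaging, output) and the state object stand in for the units 11a, 12a, 12b, and 12c; their names, the normalised load values, and the thresholds are illustrative assumptions, and the position-fixed time is assumed to be maintained as in the Embodiment 1 sketch above.

```python
from dataclasses import dataclass, field

PREDETERMINED_TIME = 3.0   # s, threshold for the position-fixed time (example value)
CHANGE_THRESHOLD = 0.05    # threshold on the parameter change (illustrative value)
MAX_LOAD = 1.0             # normalised maximum processing load (illustrative value)

@dataclass
class FrameState:
    """History carried between runs of the FIG. 4 flow."""
    history: list = field(default_factory=list)    # recent face positions
    position_fixed_time: float = 0.0                # maintained as in Embodiment 1
    last_params: list = field(default_factory=list)

def process_frame(face, state, realtime_load, dms_load,
                  compute_params, is_periodic, fluctuation_center, output):
    """One pass of FIG. 4 for a face position already acquired in step S1."""
    # S2: if the position is exactly the same as last time, end here.
    if state.history and face == state.history[-1]:
        return
    state.history.append(face)
    # S3: with load headroom, take the real-time path directly; otherwise
    # S4: require a sufficiently long position-fixed time.
    if (realtime_load + dms_load < MAX_LOAD
            or state.position_fixed_time > PREDETERMINED_TIME):
        params = compute_params(face)                                   # S5
        # The first pass counts as no change.
        change = max((abs(a - b) for a, b in zip(params, state.last_params)),
                     default=0.0)
        if change <= CHANGE_THRESHOLD:                                  # S6
            state.last_params = params
            output(params)                                              # S9 (real-time)
            return
    if not is_periodic(state.history):                                  # S7
        return
    params = compute_params(fluctuation_center(state.history))          # S8
    state.last_params = params
    output(params)                                                      # S9 (intermittent)
```

Whether step S9 is reached via S8 (intermittent control) or directly from S6 (real-time control) is decided purely by the branch taken, mirroring the description of the flowchart.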
<Summary of Embodiment 2>
According to the vehicle device 1a of the second embodiment described above, when it is determined that the load sum is smaller than the predetermined processing load, the real-time control is performed regardless of the result of the determination as to whether or not the position-fixed time is longer than the predetermined time. With such a configuration, the processing load of the vehicle device 1a can be kept from becoming excessively tight.
Also, according to the second embodiment, when the change in the parameters used for the real-time control is larger than the predetermined threshold, the intermittent control is performed regardless of the result of the determination as to whether or not the position-fixed time is longer than the predetermined time. With such a configuration, stereo reproduction can be stabilized and, in turn, the processing load of the control unit 12 can be reduced.
Here, when the face sways periodically from side to side, for example, the stereo target position at which the stereo reproduction is aimed also sways from side to side. If the two happen to be in opposite phase, that is, if the stereo target position is on the right when the face is on the left and on the left when the face is on the right, the deviation between them becomes relatively large. In contrast, according to the second embodiment, the center position of the fluctuation of the face is used for the intermittent control when it is determined that the face position fluctuates periodically. With such a configuration, the above-described deviation between the face position and the stereo target position can be reduced.
Also, according to the second embodiment, the positions of the plurality of audio output devices 51 are used for the real-time control and the intermittent control. With such a configuration, the deviation between the face position and the stereo target position can be reduced.
<Modification 1>
In the first and second embodiments, the occupant whose face position is acquired has been described as being the driver, but it may instead be a fellow passenger, or both the driver and a fellow passenger. When the face positions of both the driver and a fellow passenger are acquired, it is preferable to use a noise canceller so that the audio intended for one does not become noise for the other.
<Modification 2>
In the second embodiment, the parameter change unit 12c uses the center position of the fluctuation of the face for the intermittent control when it is determined that the face position fluctuates periodically. However, the present invention is not limited to this; the parameter change unit 12c may obtain the frequency of occurrence of each face position acquired by the image analysis unit 11a and use the most frequent face position for the intermittent control. Even with such a configuration, the deviation between the face position and the stereo target position can be reduced.
<Modification 3>
In the second embodiment, the camera 53 is arranged in the own vehicle so that the captured image includes the plurality of audio output devices 51, and the image analysis unit 11a acquires the positions of the plurality of audio output devices 51 by analyzing the image captured by the camera 53.
However, the present invention is not limited to this. The image analysis unit 11a may identify the vehicle model and grade of the own vehicle based on the image of the interior 41 of the own vehicle, and may then acquire the positions of the plurality of audio output devices 51 based on the identification result, in accordance with a correspondence between vehicle models and grades and the positions of the plurality of audio output devices 51 stored in advance in a memory (not shown) of the vehicle device 1a. With such a configuration, the positions of the plurality of audio output devices 51 can be acquired even if the image from the camera 53 does not include the plurality of audio output devices 51.
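The correspondence described here can be as simple as a table keyed by the identified model and grade, as in the sketch below. The model names, grades, and coordinates are made-up placeholders, and the identification of the model and grade from the interior image is assumed to happen elsewhere.

```python
# Placeholder table: (model, grade) -> speaker positions in cabin coordinates (metres).
# All entries are illustrative; a real table would be prepared per vehicle line-up.
SPEAKER_LAYOUTS = {
    ("model_a", "standard"): [(-0.7, 1.0), (0.7, 1.0)],
    ("model_a", "premium"):  [(-0.7, 1.0), (0.7, 1.0), (-0.6, -0.9), (0.6, -0.9)],
}

def speaker_positions_for(model, grade):
    """Look up the stored speaker layout for an identified vehicle model and grade."""
    try:
        return SPEAKER_LAYOUTS[(model, grade)]
    except KeyError:
        raise KeyError(f"no speaker layout registered for {model!r} / {grade!r}") from None

print(speaker_positions_for("model_a", "premium"))
```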
 <Other Modifications>
 The acquisition unit 11 and the control unit 12 described above are hereinafter referred to as "the acquisition unit 11 and the like". The acquisition unit 11 and the like are realized by a processing circuit 81 shown in FIG. 5. That is, the processing circuit 81 includes the acquisition unit 11 that acquires the position of the passenger's face, and the control unit 12 that performs the real-time control when the position fixing time, which is the time during which the acquired face position is fixed, is determined to be longer than the predetermined time, and performs the intermittent control when the position fixing time is determined to be equal to or shorter than the predetermined time. The processing circuit 81 may be dedicated hardware, or may be a processor that executes a program stored in a memory. Examples of the processor include a central processing unit, a processing unit, an arithmetic unit, a microprocessor, a microcomputer, and a DSP (Digital Signal Processor).
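 The switching performed by the control unit 12 could be expressed, purely as a sketch with assumed names, units, and threshold values (the publication does not specify them), as follows.

```python
import time

PREDETERMINED_TIME_S = 2.0   # assumed threshold for the position fixing time
POSITION_TOLERANCE_M = 0.02  # assumed tolerance for treating the position as fixed

class ControlSelector:
    """Chooses between real-time and intermittent speaker control based on
    how long the acquired face position has stayed fixed."""

    def __init__(self):
        self._last_position = None
        self._fixed_since = None

    def update(self, face_position, now=None):
        """face_position: lateral face coordinate in meters (assumed scalar)."""
        now = time.monotonic() if now is None else now
        if (self._last_position is None or
                abs(face_position - self._last_position) > POSITION_TOLERANCE_M):
            # The position moved: restart the position-fixing timer.
            self._fixed_since = now
        self._last_position = face_position
        position_fixing_time = now - self._fixed_since
        # Longer than the predetermined time -> real-time control;
        # otherwise -> intermittent control (cf. claim 1).
        return "real_time" if position_fixing_time > PREDETERMINED_TIME_S else "intermittent"
```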
 When the processing circuit 81 is dedicated hardware, the processing circuit 81 corresponds to, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array), or a combination thereof. The functions of the respective units such as the acquisition unit 11 may each be realized by separate processing circuits, or may be collectively realized by a single processing circuit.
 When the processing circuit 81 is a processor, the functions of the acquisition unit 11 and the like are realized in combination with software or the like. The software or the like corresponds to, for example, software, firmware, or software and firmware. The software or the like is written as a program and stored in a memory. As shown in FIG. 6, a processor 82 applied as the processing circuit 81 realizes the functions of the respective units by reading out and executing a program stored in a memory 83. That is, the audio output control device 1 includes the memory 83 for storing a program that, when executed by the processing circuit 81, results in the execution of a step of acquiring the position of the passenger's face, and a step of performing the real-time control when the position fixing time, which is the time during which the acquired face position is fixed, is determined to be longer than the predetermined time, and performing the intermittent control when the position fixing time is determined to be equal to or shorter than the predetermined time. In other words, this program can be said to cause a computer to execute the procedures or methods of the acquisition unit 11 and the like. Here, the memory 83 may be, for example, a non-volatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory), an HDD (Hard Disk Drive), a magnetic disk, a flexible disk, an optical disc, a compact disc, a MiniDisc, a DVD (Digital Versatile Disc), a drive device therefor, or any storage medium to be used in the future.
 The above has described a configuration in which each function of the acquisition unit 11 and the like is realized by either hardware or software and the like. However, this is not restrictive; a part of the acquisition unit 11 and the like may be realized by dedicated hardware, and another part may be realized by software and the like. For example, the function of the acquisition unit 11 can be realized by the processing circuit 81 as dedicated hardware, an interface, a receiver, and the like, and the remaining functions can be realized by the processing circuit 81 as the processor 82 reading out and executing the program stored in the memory 83.
 As described above, the processing circuit 81 can realize each of the functions described above by hardware, software and the like, or a combination thereof.
 The audio output control device 1 described above can also be applied to an audio output control system constructed by appropriately combining a vehicle device such as a PND (Portable Navigation Device), a navigation device, or a DMS device, a communication terminal including a mobile terminal such as a mobile phone, a smartphone, or a tablet, the functions of an application installed on at least one of the vehicle device and the communication terminal, and a server. In this case, the functions or components of the audio output control device 1 described above may be distributed among the devices constituting the system, or may be concentrated in any one of the devices.
 FIG. 7 is a block diagram showing the configuration of a server 91 according to this modification. The server 91 of FIG. 7 includes a communication unit 91a and a control unit 91b, and can perform wireless communication with a vehicle device 93, such as a navigation device or a DMS device, of a vehicle 92.
 The communication unit 91a, which serves as the acquisition unit, receives the position of the passenger's face acquired by the vehicle device 93 by performing wireless communication with the vehicle device 93.
 The control unit 91b has functions equivalent to those of the control unit 12 of FIG. 1, realized by a processor (not shown) of the server 91 executing a program stored in a memory (not shown) of the server 91. That is, the control unit 91b generates a control signal for performing the real-time control when the position fixing time, which is the time during which the received face position is fixed, is determined to be longer than the predetermined time, and for performing the intermittent control when the position fixing time is determined to be equal to or shorter than the predetermined time. The communication unit 91a then transmits the control signal generated by the control unit 91b to the vehicle device 93. The server 91 configured in this way can provide the same effects as the audio output control device 1 described in the first embodiment.
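 As a sketch only, the server-side flow of this modification might look like the following, reusing the ControlSelector sketch given earlier; vehicle_link and the message format are hypothetical.

```python
def handle_face_position(selector, vehicle_link, face_position):
    """Receive a face position reported by the vehicle device 93, choose the
    control mode, and send the resulting control signal back over the
    wireless link (vehicle_link is assumed to expose a send(dict) method)."""
    mode = selector.update(face_position)
    vehicle_link.send({"control_mode": mode, "face_position": face_position})
```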
 FIG. 8 is a block diagram showing the configuration of a communication terminal 96 according to this modification. The communication terminal 96 of FIG. 8 includes a communication unit 96a similar to the communication unit 91a and a control unit 96b similar to the control unit 91b, and can perform wireless communication with a vehicle device 98 of a vehicle 97. As the communication terminal 96, a mobile terminal carried by the driver of the vehicle 97, such as a mobile phone, a smartphone, or a tablet, is applied, for example. The communication terminal 96 configured in this way can provide the same effects as the audio output control device 1 described in the first embodiment.
 Within the scope of the present invention, the embodiments and the modifications may be freely combined, and each embodiment and each modification may be modified or omitted as appropriate.
 Although the present invention has been described in detail, the above description is in all aspects illustrative, and the present invention is not limited thereto. It is understood that innumerable modifications not illustrated can be devised without departing from the scope of the present invention.
 1 audio output control device, 11 acquisition unit, 12 control unit, 41 vehicle interior, 46 driver, 51 audio output device.

Claims (9)

  1.  An audio output control device that performs real-time control for controlling, in real time, a plurality of audio output devices mounted on a vehicle based on a position of a face of a passenger of the vehicle, the audio output control device comprising:
     an acquisition unit that acquires the position of the passenger's face; and
     a control unit that performs the real-time control when a position fixing time, which is a time during which the face position acquired by the acquisition unit is fixed, is determined to be longer than a predetermined time, and performs intermittent control, which controls the plurality of audio output devices more intermittently than the real-time control based on the face position acquired by the acquisition unit, when the position fixing time is determined to be equal to or shorter than the predetermined time.
  2.  The audio output control device according to claim 1, wherein
     the control unit uses, when the face position acquired by the acquisition unit is determined to fluctuate periodically, a center position of the fluctuation of the face for the intermittent control.
  3.  The audio output control device according to claim 1, wherein
     the passenger of the vehicle includes at least one of a driver of the vehicle and a fellow passenger of the vehicle.
  4.  The audio output control device according to claim 1, wherein
     the control unit performs the real-time control, regardless of the result of the determination as to whether the position fixing time is longer than the predetermined time, when a sum of a processing load of the real-time control and a processing load of predetermined control is determined to be smaller than a predetermined processing load.
  5.  The audio output control device according to claim 1, wherein
     the face position acquired most frequently by the acquisition unit is used for the intermittent control.
  6.  The audio output control device according to claim 1, wherein
     the acquisition unit further acquires positions of the plurality of audio output devices based on an image of an interior of the vehicle, and
     the control unit uses the positions of the plurality of audio output devices acquired by the acquisition unit for the real-time control and the intermittent control.
  7.  The audio output control device according to claim 6, wherein
     the acquisition unit identifies the vehicle based on the image of the interior of the vehicle and acquires the positions of the plurality of audio output devices based on a result of the identification.
  8.  The audio output control device according to claim 1, wherein
     the control unit performs the intermittent control, regardless of the result of the determination as to whether the position fixing time is longer than the predetermined time, when a change in a parameter used for the real-time control is larger than a predetermined threshold.
  9.  An audio output control method for performing real-time control for controlling, in real time, a plurality of audio output devices mounted on a vehicle based on a position of a face of a passenger of the vehicle, the method comprising:
     acquiring the position of the passenger's face; and
     performing the real-time control when a position fixing time, which is a time during which the acquired face position is fixed, is determined to be longer than a predetermined time, and performing intermittent control, which controls the plurality of audio output devices more intermittently than the real-time control based on the acquired face position, when the position fixing time is determined to be equal to or shorter than the predetermined time.
PCT/JP2019/015251 2019-04-08 2019-04-08 Audio output control device and audio output control method WO2020208667A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2021513033A JP6887588B2 (en) 2019-04-08 2019-04-08 Audio output control device and audio output control method
PCT/JP2019/015251 WO2020208667A1 (en) 2019-04-08 2019-04-08 Audio output control device and audio output control method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2019/015251 WO2020208667A1 (en) 2019-04-08 2019-04-08 Audio output control device and audio output control method

Publications (1)

Publication Number Publication Date
WO2020208667A1 true WO2020208667A1 (en) 2020-10-15

Family

ID=72751966

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/015251 WO2020208667A1 (en) 2019-04-08 2019-04-08 Audio output control device and audio output control method

Country Status (2)

Country Link
JP (1) JP6887588B2 (en)
WO (1) WO2020208667A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006193057A (en) * 2005-01-14 2006-07-27 Nagano Kogaku Kenkyusho:Kk Vehicle monitoring unit and room mirror apparatus
JP2008236397A (en) * 2007-03-20 2008-10-02 Fujifilm Corp Acoustic control system
JP2016199124A (en) * 2015-04-09 2016-12-01 之彦 須崎 Sound field control device and application method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2017175448A1 (en) * 2016-04-05 2019-02-14 ソニー株式会社 Signal processing apparatus, signal processing method, and program


Also Published As

Publication number Publication date
JP6887588B2 (en) 2021-06-16
JPWO2020208667A1 (en) 2021-09-13


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 19924387; Country of ref document: EP; Kind code of ref document: A1
ENP Entry into the national phase
    Ref document number: 2021513033; Country of ref document: JP; Kind code of ref document: A
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 19924387; Country of ref document: EP; Kind code of ref document: A1