WO2018079527A1 - Control device, output method, and program - Google Patents

Control device, output method, and program

Info

Publication number
WO2018079527A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound
vehicle
destination
sounds
control device
Prior art date
Application number
PCT/JP2017/038289
Other languages
English (en)
Japanese (ja)
Inventor
洋一 奥山
洋人 河内
昭光 藤吉
Original Assignee
パイオニア株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by パイオニア株式会社
Publication of WO2018079527A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 - Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 - Details of transducers, loudspeakers or microphones
    • H04R1/20 - Arrangements for obtaining desired frequency or directional characteristics
    • H04R1/32 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
    • H04R1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones
    • H04R3/04 - Circuits for transducers, loudspeakers or microphones for correcting frequency response

Definitions

  • the invention according to claim 8 provides an output method that causes a computer to execute: a sound collection step of collecting sounds outside a moving body; and a processing step of processing the sounds collected in the sound collection step so that a sound characterized by the external environment is emphasized over other sounds different from the characterized sound.
  • the invention according to claim 9 is a program that causes a computer to function as: sound collecting means for collecting sounds outside a moving body; and processing means for processing, among the sounds collected by the sound collecting means, a sound characterized by the external environment so that it is emphasized over other sounds different from the characterized sound.
  • the control device of this embodiment is an in-vehicle device mounted on a vehicle, or a mobile phone, a smartphone, a tablet terminal, or the like brought into the vehicle.
  • the control device includes a sound collection unit and a processing unit.
  • the sound collection unit collects sound outside the vehicle (moving body).
  • the processing unit processes the sound collected by the sound collection unit so that a sound characterized by the external environment or by the destination of the vehicle is emphasized over other sounds different from the characterized sound, and outputs the result from the speaker. For example, the desired sound is emphasized by reducing sounds of predetermined frequencies by filtering.
  • sounds characterized by the external environment are sounds that characteristically represent the external environment: for example, the sound of waves if the vehicle is located near the sea; the sound of birds, the rustling of leaves, or the sound of a river if it is located in the mountains; and the voices of children if it is located in a school zone.
  • the "sound characterized by the destination of the moving object" is a sound that characteristically represents the destination: for example, the sound of waves when the destination is the sea, or the sound of birds, the rustling of leaves, the sound of a river, and the like when the destination is a mountain.
  • the sound that characterizes the current external environment and the sound that characterizes the destination can thus be emphasized over other sounds and conveyed to the person in the vehicle. A person can quickly and accurately grasp the current external environment from such sounds. As a result, safety can be ensured, and the occupants can enjoy driving while feeling close to the destination or recognizing the external environment of the current location.
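  • as a hedged illustration of the emphasis processing described above (a minimal sketch only: the patent does not prescribe a specific algorithm, and the band edges, attenuation factor, and sample rate below are assumptions), a sound assumed to occupy a known frequency band can be emphasized by keeping that band and attenuating the rest of the microphone signal:

      # Minimal sketch: emphasize a sound assumed to occupy a known frequency band
      # by passing that band and attenuating everything else. Band edges and the
      # attenuation factor are illustrative assumptions, not values from the patent.
      import numpy as np
      from scipy.signal import butter, lfilter

      def emphasize_band(signal, fs, low_hz, high_hz, residual_gain=0.2):
          """Pass the [low_hz, high_hz] band and reduce other frequency components."""
          b_bp, a_bp = butter(4, [low_hz, high_hz], btype="bandpass", fs=fs)
          b_bs, a_bs = butter(4, [low_hz, high_hz], btype="bandstop", fs=fs)
          in_band = lfilter(b_bp, a_bp, signal)      # the characterized sound
          residual = lfilter(b_bs, a_bs, signal)     # all other sounds, attenuated below
          return in_band + residual_gain * residual

      # Example: emphasize a hypothetical "wave sound" band in a 1-second test signal.
      fs = 16_000
      t = np.arange(fs) / fs
      mixed = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
      emphasized = emphasize_band(mixed, fs, low_hz=100, high_hz=800)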
  • the in-vehicle device 10 of the present embodiment includes a sound collection unit 11, a processing unit 12, and a detection unit 13.
  • each functional unit shown in the functional block diagram is realized by an arbitrary combination of hardware and software, centered on a CPU (Central Processing Unit) of an arbitrary computer, a memory, a program loaded into the memory, a storage unit such as a hard disk that stores the program (which can store not only programs shipped with the device in advance but also programs downloaded from storage media such as CDs (Compact Discs) or from servers on the Internet), and a network connection interface.
  • FIG. 2 is a block diagram illustrating a hardware configuration of the in-vehicle device 10 of the present embodiment.
  • the in-vehicle device 10 includes a processor 1A, a memory 2A, an input / output interface 3A, a peripheral circuit 4A, and a bus 5A.
  • the peripheral circuit 4A includes various modules. The peripheral circuit 4A may not be provided.
  • the bus 5A is a data transmission path through which the processor 1A, the memory 2A, the peripheral circuit 4A, and the input / output interface 3A transmit / receive data to / from each other.
  • the processor 1A is an arithmetic processing unit such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit).
  • the memory 2A is a memory such as a RAM (Random Access Memory) or a ROM (Read Only Memory).
  • the input/output interface 3A includes an interface for acquiring information from an input device (e.g., keyboard, mouse, microphone), an external device, an external server, an external sensor, and the like, and an interface for outputting information to an output device (e.g., display, speaker, printer, mailer), an external device, an external server, and the like.
  • the processor 1A can issue a command to each module and perform a calculation based on the calculation result.
  • the detection unit 13 acquires information indicating the destination of the host vehicle from the route search device.
  • the route search device accepts an input from the user specifying a destination. Then, the route search device searches for a route from the departure place (eg, current position or position specified by the user) to the destination, and provides it to the user. Further, the route search device can guide to the destination based on the determined route and the current position.
  • a route search device can be realized according to the prior art.
  • the detection unit 13 may acquire information indicating a waypoint instead of, or in addition to, the destination. In the following, waypoints are treated as included in the destination.
  • the in-vehicle device 10 may be physically and / or logically integrated with the route search device.
  • the in-vehicle device 10 may be configured to be physically and / or logically separated from the route search device. In the latter case, the in-vehicle device 10 and the route search device are configured to transmit and receive information by wired and / or wireless communication.
  • the detection unit 13 may further detect the current position of the host vehicle.
  • the means for acquiring the current position may be, for example, one using GPS (Global Positioning System), but is not particularly limited.
  • the processing unit 12 specifies the type of the destination when receiving the information indicating the destination from the detection unit 13.
  • the processing unit 12 can specify the destination type using map information that is held in advance, correspondence information that associates the destination with the destination type, or the like.
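  • purely as an illustration of such correspondence information (the destination names and type labels below are hypothetical; the patent only states that map information or correspondence information is held in advance), the lookup could be as simple as:

      # Hypothetical correspondence information mapping a destination to a
      # destination type. The entries are invented for illustration only.
      from typing import Optional

      DESTINATION_TYPES = {
          "Beach A": "sea",
          "Mt. X trailhead": "mountain",
          "City Zoo": "zoo",
      }

      def destination_type(destination: str) -> Optional[str]:
          """Return the type of the given destination, or None if it is unknown."""
          return DESTINATION_TYPES.get(destination)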
  • if the amplification factor of the signal input to the speaker is changed as described above, the sound related to the destination can be output from the speaker in the vehicle at a sufficient volume when the vehicle is sufficiently close to the destination. As a result, the above inconvenience can be reduced.
  • the processing unit 12 may perform processing such that the sound characterized by the current position is emphasized over other sounds.
  • the processing unit 12 can realize the process of emphasizing the sound characterized by the current position over other sounds in the same manner as the above-described process of emphasizing the sound characterized by the destination over other sounds. That is, the processing unit 12 may select one of a plurality of frequency filters prepared in advance based on the current position. Then, the processing unit 12 may filter the electrical signal using the selected frequency filter and output sound from the speaker based on the filtered electrical signal. For example, the processing unit 12 may prepare a frequency filter for each type of position. Examples of position types include "near the sea", "mountain", "near a zoo", and the like, but are not limited thereto.
  • the processing unit 12 determines an output condition based on the destination and sets the determined content (S21). For example, the processing unit 12 determines a frequency filter to be applied based on the destination and sets the determined frequency filter. Further, the processing unit 12 may continue monitoring the distance between the current position and the destination, and determine and set the amplification factor of the signal input to the speaker according to that distance. Thereafter, the same processing is repeated.
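  • a hedged sketch of the filter-selection part of step S21 (the mapping from destination type to pass band and the band edges are assumptions; the patent treats the concrete filters as design matters):

      # Sketch: pick a pre-prepared band-pass filter from the destination type.
      # The pass bands per type are illustrative assumptions.
      from scipy.signal import butter

      FILTER_BANDS_HZ = {
          "sea": (100, 800),         # e.g. the sound of waves
          "mountain": (1000, 6000),  # e.g. bird calls, rustling leaves
          "zoo": (200, 6000),        # e.g. animal calls
      }

      def select_filter_for_destination(dest_type, fs=16_000, order=4):
          """Return (b, a) coefficients of the band-pass filter for the destination type."""
          low_hz, high_hz = FILTER_BANDS_HZ[dest_type]
          return butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)

      b, a = select_filter_for_destination("sea")  # then filter the mic signal, e.g. with lfilter(b, a, x)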
  • the in-vehicle device 10 of the present embodiment can output sound outside the vehicle from a speaker installed in the vehicle under conditions determined based on the destination.
  • the in-vehicle device 10 outputs sound outside the vehicle from a speaker after performing a filtering process with a frequency filter selected based on the destination and the current position. As a result, sounds related to the destination and the current position, for example sounds characterized by the destination and the current position, are left, while other sounds are reduced, and the sound outside the vehicle can be output from a speaker installed in the vehicle.
  • a person in the vehicle can quickly and accurately grasp the current external environment based on the sound output from the speaker.
  • as a result, safety can be ensured, and the occupants can enjoy driving while feeling close to the destination or recognizing the external environment of the current location.
  • the in-vehicle device 10 can change the amplification factor of the signal input to the speaker according to the distance between the destination and the current position, that is, the distance to the destination. For example, the amplification factor used when the distance between the destination and the current position is less than a predetermined value can be made larger than the amplification factor used when the distance is greater than or equal to the predetermined value. Note that when the distance between the destination and the current position is greater than or equal to the predetermined value, the amplification factor may be set to 0 (that is, sound outside the vehicle is not output from the speaker).
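  • a hedged sketch of the distance-dependent amplification factor described in this paragraph; the threshold and both gain values are assumptions (the patent leaves the predetermined value as a design matter):

      # Sketch: choose the speaker input gain from the distance to the destination.
      # THRESHOLD_M and both gain values are illustrative assumptions.
      THRESHOLD_M = 1_000.0  # the "predetermined value" in the text; a design matter

      def speaker_gain(distance_to_destination_m: float) -> float:
          """Larger gain when close to the destination; optionally 0 when far away."""
          if distance_to_destination_m < THRESHOLD_M:
              return 1.0   # close: output the destination-related sound at sufficient volume
          return 0.0       # far: the text allows suppressing the exterior sound entirely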
  • sounds related to the destination may also be emitted at places other than the destination. For example, the sound of waves is illustrated above as a sound related to the destination. The sound of waves is emitted not only at the destination beach A but also at beach B, port C, and so on. For this reason, if sound outside the vehicle is output from the speaker under the same conditions regardless of the distance to the destination, a situation may occur in which a sound related to the destination is notified to a person in the vehicle at a location different from the destination.
  • if the amplification factor of the signal input to the speaker is changed as described above, the sound related to the destination can be output from the speaker in the vehicle at a sufficient volume when the vehicle is sufficiently close to the destination. As a result, the above inconvenience can be reduced.
  • further, the frequency filter is selected based on the destination; that is, a frequency filter that passes the sound related to the destination is selected. In this way, the sound to be notified to the person in the vehicle can be appropriately switched according to the distance to the destination.
  • since an example of the hardware configuration of each functional unit is as described in the first embodiment, the description thereof is omitted here.
  • functions of the respective functional units shown in the functional block diagram of FIG. 1 will be described in detail.
  • the configurations of the sound collection unit 11 and the detection unit 13 are the same as those in the first embodiment.
  • the processing unit 12 determines an output condition for the sound outside the vehicle based on the destination and the current position. For example, the processing unit 12 determines a frequency filter to be applied, or determines the amplification factor of the signal input to the speaker. The processing unit 12 then outputs the sound outside the vehicle from the speaker installed in the vehicle under the determined conditions.
  • a specific example of processing by the processing unit 12 will be described.
  • the processing unit 12 can calculate the distance between the destination and the current position. When the distance is greater than or equal to a predetermined value (a design matter), the processing unit 12 determines the output condition based on the current position. On the other hand, when the distance is less than the predetermined value, the processing unit 12 determines the output condition based on the destination.
  • the process for determining the condition based on the destination is the same as in the first embodiment.
  • next, an example of the process for determining a condition based on the current position will be described.
  • the processing unit 12 can select a frequency filter used for the filter processing from a plurality of frequency filters based on the current position.
  • the plurality of frequency filters differ from one another in the frequency band to be passed and the frequency band to be reduced.
  • for example, the plurality of frequency filters may include any one or more of: a "frequency filter that passes the components of the frequency band of a human voice and reduces the components of other frequency bands"; a "frequency filter that passes the components of the frequency band of a human child's voice and reduces the components of other frequency bands"; a "frequency filter that passes the components of the frequency band of bicycle sounds (e.g., brake sounds, bell sounds, wheel rotation sounds) and reduces the components of other frequency bands"; a "frequency filter that passes the components of the frequency band of the sound of sea waves and reduces the components of other frequency bands"; and a "frequency filter that passes the components of the frequency band of bird calls and reduces the components of other frequency bands".
  • the illustrated frequency filters are merely examples, and the frequency filters are not limited thereto.
  • the processing unit 12 can select a frequency filter based on the current position detected by the detection unit 13. For example, the processing unit 12 may specify an attribute (position attribute) of the current position. Then, the processing unit 12 may select a frequency filter corresponding to the specified position attribute.
  • the processing unit 12 selects a frequency filter corresponding to the specified position attribute of the current position. For example, as illustrated in FIG. 6, the processing unit 12 may hold correspondence information in which position attributes are associated with frequency filters. Then, the processing unit 12 may select the frequency filter corresponding to the position attribute of the current position based on the correspondence information.
  • the frequency filter corresponding to “position attribute: school zone” may be, for example, “a frequency filter that passes a frequency band component of a human child's voice and reduces other frequency band components”.
  • the frequency filter corresponding to "position attribute: heavy bicycle traffic" may be, for example, "a frequency filter that passes the frequency band components of bicycle sounds (e.g., brake sounds, bell sounds, wheel rotation sounds) and reduces other frequency band components".
  • the frequency filter corresponding to “position attribute: along the sea” may be, for example, “a frequency filter that passes the components of the frequency band of the sound of the sea wave and reduces the components of other frequency bands”.
  • the frequency filter corresponding to “position attribute: a bird's cry can be heard” may be, for example, “a frequency filter that passes the components of the frequency band of the bird's cry and reduces the components of other frequency bands”.
  • the processing unit 12 performs a filtering process on the electrical signal generated by the sound collection unit 11 using, for example, the frequency filter selected as described above.
  • the processing unit 12 may hold correspondence information in which a plurality of combinations of position attributes and frequency filters are associated with each other.
  • a frequency filter may be associated with a combination of “position attribute: school zone” and “position attribute: heavy bicycle traffic”.
  • the frequency filter associated with that combination may be, for example, "a frequency filter that passes the frequency band components of a human child's voice and the frequency band components of bicycle sounds (e.g., brake sounds, bell sounds, wheel rotation sounds) and reduces other frequency band components".
  • in some cases, the processing unit 12 may not select a frequency filter. In this case, the sound outside the vehicle may be output from the speaker without performing the filtering process. Alternatively, the processing unit 12 may not output the sound outside the vehicle from the speaker at the current position.
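  • a hedged sketch of the correspondence information of FIG. 6, including an entry for a combination of position attributes and the case where no filter is selected; the attribute names follow the examples above, while the pass bands and the matching rule are assumptions:

      # Sketch: correspondence information from position attributes to filter pass
      # bands, including a combined-attribute entry. Band edges are assumptions.
      from typing import FrozenSet, Optional, Tuple

      ATTRIBUTE_FILTERS = {
          frozenset({"school zone"}): (300, 3000),               # children's voices
          frozenset({"heavy bicycle traffic"}): (800, 7000),     # brake/bell/wheel sounds
          frozenset({"along the sea"}): (100, 800),              # sound of sea waves
          frozenset({"bird calls audible"}): (1000, 6000),       # bird calls
          frozenset({"school zone", "heavy bicycle traffic"}): (300, 7000),
      }

      def select_filter_band(attributes: FrozenSet[str]) -> Optional[Tuple[int, int]]:
          """Return the pass band for the current position attributes, or None.

          None means no filter is selected: the exterior sound may then be output
          unfiltered, or not output at all, as described above.
          """
          if attributes in ATTRIBUTE_FILTERS:        # prefer an exact combined match
              return ATTRIBUTE_FILTERS[attributes]
          for attr in attributes:                    # otherwise fall back to any single attribute
              band = ATTRIBUTE_FILTERS.get(frozenset({attr}))
              if band is not None:
                  return band
          return None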
  • depending on the current position, the sound that passes through the frequency filter and is output from the speaker, that is, the sound notified to the person in the vehicle, differs. In the above example, the output conditions are such that the voice of a child is notified in a school zone, the sound of a bicycle is notified at a position with heavy bicycle traffic, the sound of sea waves is notified along the sea, and the call of a bird is notified at a position where bird calls can be heard.
  • the processing unit 12 may change the amplification factor of the signal input to the speaker according to the sound characterized by the current position (that is, according to the current position). Specifically, the processing unit 12 may set the amplification factor so that the sound to be notified to the person in the vehicle at the current position is conveyed at an appropriate volume (neither too loud nor too quiet).
  • the processing unit 12 can determine some of the microphones (a first group of microphones) according to the sound characterized by the current position of the host vehicle (that is, according to the current position). That is, the processing unit 12 can group the plurality of microphones into a first group and a second group based on the current position of the host vehicle.
  • for example, when the sea is on one side of the vehicle, the processing unit 12 may make the amplification factor of the sound signals collected by the microphones located on the side where the sea exists larger than the amplification factor of the sound signals collected by the other microphones.
  • for example, the processing unit 12 may classify the microphones located on the right side of the host vehicle (e.g., right front, right rear) into the first group, and the microphones located on the left side (e.g., left front, left rear) into the second group.
  • the amplification factor of the sound signals collected by the first group may then be made larger than the amplification factor of the sound signals collected by the second group.
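  • a hedged sketch of the microphone grouping and per-group amplification just described; the channel layout, group membership, and gain values are assumptions:

      # Sketch: mix the exterior microphones with a larger gain for the group on the
      # side of the characteristic sound source (here, assumed to be the right side).
      import numpy as np

      MIC_POSITIONS = ["front_left", "front_right", "rear_left", "rear_right"]

      def mix_with_group_gains(channels, first_group, gain_first=1.5, gain_second=0.5):
          """channels: array of shape (num_mics, num_samples), one row per microphone."""
          gains = np.array([
              gain_first if name in first_group else gain_second
              for name in MIC_POSITIONS
          ])
          return gains @ channels  # weighted sum over the microphone axis

      # Example: the sea is on the right, so the right-side microphones form the first group.
      first_group = {"front_right", "rear_right"}
      channels = np.zeros((4, 16_000))            # placeholder 4-mic, 1-second recording
      mono_out = mix_with_group_gains(channels, first_group)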
  • the sound collection unit 11 collects the sound outside the vehicle (S10). The processing unit 12 then processes the electrical signal and outputs the sound outside the vehicle from the speaker installed in the vehicle under the set output conditions (S11). Thereafter, the same processing is repeated.
  • the detection unit 13 continues to monitor the distance between the destination of the host vehicle and the current position. If the distance is greater than or equal to the predetermined value (Yes in S30), the detection unit 13 determines and sets an output condition based on the current position (S31). On the other hand, if the distance is less than the predetermined value (No in S30), the detection unit 13 determines and sets an output condition based on the destination (S32). Thereafter, the same processing is repeated.
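  • combining the two flows (S10/S11 and S30 to S32), a minimal sketch of the repeated processing; every helper on the device object below is a hypothetical placeholder for a step described in the text, not an API from the patent:

      # Sketch of the repeated processing: monitor the distance to the destination,
      # set the output condition from the current position (far) or the destination
      # (near), then collect, process, and play the exterior sound.
      def output_loop(device, threshold_m):
          while device.is_running():
              distance = device.distance_to_destination()             # S30
              if distance >= threshold_m:
                  conditions = device.conditions_from_position()      # S31
              else:
                  conditions = device.conditions_from_destination()   # S32
              audio = device.collect_exterior_sound()                 # S10
              device.play(conditions.apply(audio))                    # S11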
  • if the sound is hard to hear, there may be a problem that people in the vehicle miss the sound. Also, if the sound is too loud, it can hinder driving. In addition, if the sound is too loud, a person in the vehicle may lower the speaker volume setting, after which a problem such as difficulty in hearing a predetermined sound may occur.
  • according to the in-vehicle device 10, which can change the amplification factor of the signal input to the speaker in accordance with the current position and adjust the sound to be notified to the person in the vehicle at each position so that it is output from the speaker at an appropriate volume, the above inconvenience can be reduced.
  • the in-vehicle device 10 can make the amplification factor of the sound signal collected by a microphone that easily collects the sound to be notified to the person in the vehicle (for example, the microphone on the side where the sound source of that sound is located) larger than the amplification factor of the sound signals collected by the other microphones, and output the result from the speaker. According to such an in-vehicle device 10, the sound to be notified to the person in the vehicle can be emphasized and output from the speaker. As a result, the sound that should be notified to the person in the vehicle at each location can be notified more reliably than with the prior art.
  • an alarm sound may be output from the speaker instead of outputting the sound outside the vehicle from the speaker inside the vehicle.
  • sound outside the vehicle may be output from a speaker inside the vehicle.
  • when an emergency sound from an emergency vehicle existing around the host vehicle is detected, a buzzer sound or emergency sound set in the control device in advance may be output from the speaker instead of outputting the sound outside the vehicle from the speaker inside the vehicle.
  • alternatively, sounds such as the alarm sound (e.g., a siren) and the voice from the emergency vehicle's loudspeaker may be left, while other sounds are reduced, and the result may be output from the speaker in the vehicle.
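  • a hedged sketch of this emergency-vehicle handling; the siren detector, the preset buzzer, and the siren pass band are assumptions:

      # Sketch: when an emergency siren is detected nearby, either play a preset
      # buzzer/emergency sound, or pass only the siren band and reduce other sounds.
      from scipy.signal import butter, lfilter

      SIREN_BAND_HZ = (500, 2000)  # rough siren band; illustrative assumption

      def handle_exterior_audio(audio, fs, siren_detected, preset_buzzer, use_preset=True):
          if siren_detected:
              if use_preset:
                  return preset_buzzer            # play the pre-set alarm sound instead
              b, a = butter(4, SIREN_BAND_HZ, btype="bandpass", fs=fs)
              return lfilter(b, a, audio)         # keep the siren/voice, reduce the rest
          return audio                            # no emergency vehicle: unchanged here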
  • the detection unit 13 may detect the automatic driving level permitted on the road on which the host vehicle is traveling. Then, the processing unit 12 may process the sound collected by the sound collection unit 11 based on the automatic driving level permitted on that road.
  • for example, the processing unit 12 may output the sound collected by the sound collection unit 11 from the speaker installed in the vehicle when the automatic driving level permitted on the road on which the host vehicle is traveling is 0 to 2.
  • when the automatic driving level is 3 to 5, the processing unit 12 may not output the sound collected by the sound collection unit 11 from the speaker installed in the vehicle, since the driver is not involved in driving control and the necessity of outputting the sound outside the vehicle is low.
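  • a minimal sketch of gating the output on the permitted automatic driving level, following the split described above (levels 0 to 2: output; levels 3 to 5: do not output):

      # Sketch: decide whether to output the exterior sound from the permitted
      # automatic driving level of the current road.
      def should_output_exterior_sound(permitted_level: int) -> bool:
          if 0 <= permitted_level <= 2:
              return True    # driver is involved in driving control
          return False       # levels 3-5: driver not involved, so output is unnecessary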
  • the detection unit 13 may further detect whether or not the conditions for automatic driving are satisfied. Then, the detection unit 13 may notify the processing unit 12 of the result.
  • the detection unit 13 can realize the detection using a known technique.
  • if the conditions for automatic driving are satisfied, the processing unit 12 may not output the sound collected by the sound collection unit 11 from the speaker installed in the vehicle; if the conditions for automatic driving are not satisfied, the processing unit 12 may output the sound collected by the sound collection unit 11 from the speaker installed in the vehicle.
  • for example, the detection unit 13 detects whether the map information corresponding to the current position of the host vehicle is a high-precision map. If it is a high-precision map, that is, if automatic driving is possible, the processing unit 12 may not output the sound collected by the sound collection unit 11 from the speaker installed in the vehicle. If it is not a high-precision map, that is, if automatic driving is not possible, the sound collected by the sound collection unit 11 may be output from the speaker installed in the vehicle.
  • the amplification factor used when the sound collected by the sound collection unit 11 is output from the speaker installed in the vehicle may be determined arbitrarily according to the current position. For example, the amplification factor may be increased at intersections and in urban areas, and lowered in mountainous areas where there is little traffic.
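  • a hedged sketch of this position-dependent amplification factor; the place categories follow the examples in the preceding paragraph, while the gain values and the default are assumptions:

      # Sketch: choose the amplification factor from the kind of place the vehicle
      # is currently in.
      POSITION_GAINS = {
          "intersection": 1.5,
          "urban area": 1.5,
          "mountainous area": 0.5,
      }

      def amplification_for_position(position_kind: str, default: float = 1.0) -> float:
          return POSITION_GAINS.get(position_kind, default)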
  • the detection unit 13 may detect objects (obstacles, preceding vehicles, signs, white lines, etc.) using sensors mounted on the host vehicle (camera, lidar (LiDAR: Light Detection and Ranging), radar, etc.). The processing unit 12 may then change the control content of the process of outputting the sound collected by the sound collection unit 11 from the speaker installed in the vehicle according to the detection accuracy of the sensors mounted on the host vehicle.
  • when the detection unit 13 detects that it is difficult to continue automatic driving, the sound collected by the sound collection unit 11 may be output from the speaker installed in the vehicle. The amplification factor in this case may likewise be determined arbitrarily according to the current position (for example, increased at intersections and in urban areas, and lowered in mountainous areas with little traffic).
  • the control device may be a mobile terminal such as a mobile phone, a smartphone, or a tablet terminal brought into the vehicle.
  • the sound outside the vehicle collected by the microphone installed in the vehicle is transmitted to the portable terminal by wire or wirelessly.
  • sounds outside the vehicle can be output (notified in the vehicle) from a speaker mounted on the mobile terminal.
  • the speaker that outputs the sound outside the vehicle to the inside of the vehicle may be a speaker connected to the portable terminal by wire or wirelessly, instead of the speaker mounted on the portable terminal.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • General Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The invention relates to a control device (in-vehicle device (10)) comprising: a sound collection unit (11) that collects sound outside a moving body; and a processing unit (12) that processes, among the sounds collected by the sound collection unit (11), a sound characterized by the external environment so that the characterized sound is emphasized over other sounds that differ from the characterized sound.
PCT/JP2017/038289 2016-10-25 2017-10-24 Dispositif de commande, procédé de sortie et programme WO2018079527A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016208480 2016-10-25
JP2016-208480 2016-10-25

Publications (1)

Publication Number Publication Date
WO2018079527A1 (fr) 2018-05-03

Family

ID=62023471

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/038289 WO2018079527A1 (fr) 2016-10-25 2017-10-24 Dispositif de commande, procédé de sortie et programme

Country Status (1)

Country Link
WO (1) WO2018079527A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009051333A (ja) * 2007-08-27 2009-03-12 Nissan Motor Co Ltd 車両用聴覚モニタ装置
JP2009113659A (ja) * 2007-11-07 2009-05-28 Toyota Motor Corp 車両用ノイズキャンセル装置
JP2011233090A (ja) * 2010-04-30 2011-11-17 Toyota Motor Corp 車外音検出装置
JP2013149080A (ja) * 2012-01-19 2013-08-01 Denso Corp 音声出力装置
JP2015217798A (ja) * 2014-05-16 2015-12-07 三菱電機株式会社 車載情報表示制御装置

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020255246A1 (fr) * 2019-06-18 2020-12-24 三菱電機株式会社 Dispositif d'aide à la conduite, système d'aide à la conduite et procédé d'aide à la conduite
JPWO2020255246A1 (ja) * 2019-06-18 2021-11-18 三菱電機株式会社 運転支援装置、運転支援システムおよび運転支援方法
JP2022035024A (ja) * 2020-08-20 2022-03-04 トヨタ自動車株式会社 情報処理装置、情報処理方法、及び情報処理プログラム
JP7347368B2 (ja) 2020-08-20 2023-09-20 トヨタ自動車株式会社 情報処理装置、情報処理方法、及び情報処理プログラム
JP2022165229A (ja) * 2021-04-19 2022-10-31 株式会社カプコン システム、プログラムおよび移動体
JP7273328B2 (ja) 2021-04-19 2023-05-15 株式会社カプコン システム、プログラムおよび移動体

Similar Documents

Publication Publication Date Title
KR102465970B1 (ko) 주변 상황에 기초하여 음악을 재생하는 방법 및 장치
US10214145B2 (en) Vehicle control device and vehicle control method thereof
US20190220248A1 (en) Vehicle with external audio speaker and microphone
US20140056438A1 (en) System for vehicle sound synthesis
US20180290590A1 (en) Systems for outputting an alert from a vehicle to warn nearby entities
JP2016515275A (ja) 統合型ナビゲーション及び衝突回避システム
WO2018079527A1 (fr) Dispositif de commande, procédé de sortie et programme
JP2007279975A (ja) 車載装置、音声情報提供システムおよび発話速度調整方法
US10741076B2 (en) Cognitively filtered and recipient-actualized vehicle horn activation
JP2008114649A (ja) クラクション制御装置
JP6690007B2 (ja) 処理装置、サーバ装置、出力方法及びプログラム
JP2015057686A (ja) 注意喚起装置
WO2018163545A1 (fr) Dispositif et procédé de traitement d'informations, et support d'enregistrement
JP6726297B2 (ja) 処理装置、サーバ装置、出力方法及びプログラム
JP4873255B2 (ja) 車両用報知システム
JP5211747B2 (ja) 音響制御装置及び音響制御プログラム
US20200221250A1 (en) System and method for velocity-based geofencing for emergency vehicle
JP4051990B2 (ja) 安全運転支援装置及び画像処理装置
US11705141B2 (en) Systems and methods to reduce audio distraction for a vehicle driver
US11981265B2 (en) In-vehicle device and method for controlling in-vehicle device
WO2018139650A1 (fr) Dispositif de commande audio, procédé de commande audio et programme
WO2023204076A1 (fr) Procédé de commande acoustique et dispositif de commande acoustique
JP2013182585A (ja) 運転支援装置及びプログラム
EP4273832A1 (fr) Véhicule et système et procédé pour une utilisation avec un véhicule
WO2018062476A1 (fr) Dispositif embarqué, procédé de génération et programme

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17864878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17864878

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP