WO2024062757A1 - Information processing device, information processing system, and information processing method


Info

Publication number
WO2024062757A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
sound
sound data
vehicle
processing
Application number
PCT/JP2023/026774
Other languages
English (en)
Japanese (ja)
Inventor
正寛 中西
威 岡見
信晃 姫野
宏親 前垣
誠治 平出
Original Assignee
ヤマハ株式会社
Application filed by ヤマハ株式会社
Publication of WO2024062757A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 Acoustics not otherwise provided for
    • G10K15/04 Sound-producing devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones

Definitions

  • the present disclosure relates to a technology for processing sound data used in a sound output device.
  • a sound output device placed near a user accesses a user profile placed on a cloud, and determines processing parameters for sound data based on the user profile.
  • the sound output device outputs to the user sound based on the sound data processed using the processing parameters.
  • In the above configuration, processing of sound data is performed by the sound output device itself. The sound output device therefore needs to be equipped with a high-performance control device capable of processing sound data, which poses a problem in that the cost of the sound output device increases.
  • One aspect of the present disclosure aims to reduce the cost of a sound output device.
  • According to one aspect of the present disclosure, an information processing device generates a plurality of output sound data used in each of a plurality of sound output devices.
  • The information processing device includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding an attribute of the input sound data and second information regarding a first sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the first sound output device by performing the acoustic processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the first sound output device via a network.
  • According to another aspect of the present disclosure, an information processing device performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices. The device includes: an information acquisition unit that acquires at least one of first information regarding an attribute of input sound data and second information regarding a first sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • an information processing system includes: a plurality of sound output devices; and an information processing device that generates a plurality of output sound data used in each of the plurality of sound output devices.
  • The information processing device includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding an attribute of the input sound data and second information regarding a first sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation section that generates output sound data to be used in the first sound output device by subjecting the input sound data to the acoustic processing using the parameter; and a data transmission control section that transmits the output sound data to the first sound output device.
  • an information processing system includes: a plurality of sound output devices; and an information processing device that performs processing regarding a plurality of output sound data used in each of the plurality of sound output devices.
  • the information processing device includes an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one of the plurality of sound output devices.
  • The information processing device further includes: a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • According to another aspect of the present disclosure, an information processing method realized by a computer generates a plurality of output sound data to be used in each of a plurality of sound output devices. The method includes: acquiring input sound data; acquiring at least one of first information regarding an attribute of the input sound data and second information regarding one of the plurality of sound output devices; determining, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; generating output sound data by subjecting the input sound data to the acoustic processing using the parameter; and transmitting the output sound data to the one sound output device via the network.
  • According to another aspect of the present disclosure, an information processing method realized by a computer performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices. The method includes: acquiring at least one of first information regarding an attribute of input sound data and second information regarding one of the plurality of sound output devices; determining, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and determining, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • FIG. 1 is a diagram illustrating the configuration of an information processing system 1 according to a first embodiment.
  • FIG. 2 is an explanatory diagram showing the relationship between input sound data Di and output sound data Do.
  • FIG. 3A is a diagram schematically showing an example of a data flow between a distribution server 30 and an in-vehicle audio device 20A.
  • FIG. 3B is a diagram schematically showing another example of the data flow between the distribution server 30 and the in-vehicle audio device 20A.
  • FIG. 4 is a block diagram showing the configuration of an audio server 10.
  • FIG. 5 is a block diagram showing the configuration of the in-vehicle audio device 20A.
  • FIG. 6 is a diagram illustrating the arrangement of speakers 230 in a vehicle C.
  • FIG. 7 is a flowchart showing the operation of a control device 103 of the audio server 10.
  • FIG. 8 is a block diagram showing the configuration of an audio server 10 according to a second embodiment.
  • FIG. 9 is a diagram schematically showing a map F for selecting engine sound data Dse from a virtual engine speed and accelerator opening information.
  • FIG. 10 is a block diagram showing the configuration of an in-vehicle audio device 20A according to a third embodiment.
  • FIG. 11 is a diagram illustrating the configuration of an information processing system 2 according to a fourth embodiment.
  • FIG. 12 is a block diagram showing the configuration of the audio server 10 according to the fourth embodiment.
  • FIG. 13 is a flowchart showing the operation of the control device 103 of the audio server 10 according to the fourth embodiment.
  • FIG. 1 is a diagram illustrating the configuration of an information processing system 1 according to a first embodiment.
  • the information processing system 1 includes an audio server 10 and a plurality of in-vehicle audio devices 20 (20A to 20N).
  • the audio server 10 is an example of an information processing device and a computer, and the in-vehicle audio devices 20A to 20N are an example of a plurality of sound output devices.
  • the audio server 10 and each of the plurality of in-vehicle audio devices 20A to 20N are connected to a network N.
  • the network N may be a wide area network such as the Internet, or may be a local area network (LAN) of a facility or the like.
  • the in-vehicle audio devices 20A to 20N are each mounted on a vehicle C (see FIG. 5) such as an automobile, and output sound into the cabin of the vehicle C from a speaker 230 (see FIG. 5).
  • Each of the plurality of in-vehicle audio devices 20A to 20N is mounted on a plurality of different vehicles C.
  • the functions will be explained focusing on one of the plurality of vehicle-mounted sound devices 20A to 20N, but the other vehicle-mounted sound devices 20B to 20N also have the same functions as the vehicle-mounted sound device 20A.
  • the vehicle-mounted audio device 20A is an example of one sound output device among the plurality of vehicle-mounted audio devices 20A to 20N.
  • the sounds output from the in-vehicle audio devices 20A to 20N are, for example, sounds such as songs or radio broadcasts, guidance sounds from the navigation device 52, or warning sounds from the safety system of the vehicle C.
  • the audio server 10 generates a plurality of output sound data Do (see FIG. 2) used in each of the plurality of in-vehicle audio devices 20A to 20N.
  • FIG. 2 is an explanatory diagram showing the relationship between input sound data Di and output sound data Do.
  • one vehicle-mounted audio device 20A among the plurality of vehicle-mounted audio devices 20A to 20N is taken as an example.
  • the audio server 10 acquires the sound data of the sound output by the in-vehicle audio device 20A as input sound data Di.
  • the input sound data Di includes at least one of local sound data Dsl transmitted from the in-vehicle audio devices 20A to 20N and distributed sound data Dsn distributed from the distribution server 30.
  • the distribution server 30 is a server that distributes sound data via the network N.
  • the acoustic server 10 generates output sound data Do by performing acoustic processing to add acoustic effects to the input sound data Di, and transmits the output sound data Do to the in-vehicle audio device 20A.
  • the in-vehicle audio device 20A that has received the output sound data Do outputs sound based on the output sound data Do from the speaker 230.
  • the audio server 10 similarly transmits the output sound data Do to the other vehicle-mounted audio devices 20B to 20N.
  • FIG. 4 is a block diagram showing the configuration of the audio server 10.
  • the acoustic server 10 includes a communication device 101, a storage device 102, and a control device 103.
  • the communication device 101 communicates with other devices using wireless communication or wired communication.
  • the communication device 101 includes a communication interface connectable to the network N using wired communication, and communicates with the in-vehicle audio devices 20A to 20N via the network N. Furthermore, the communication device 101 communicates with the distribution server 30 via the network N.
  • the storage device 102 stores a program PG1 executed by the control device 103.
  • the storage device 102 also stores map data MP, vehicle-specific acoustic characteristic information DB, and user setting data US.
  • the map data MP includes at least one of information such as the topography of each region, the shape of the road, the number of lanes, the type of facilities (including forests, etc.) around the road, and predicted traffic volume by time of day.
  • the map data MP is not limited to being stored in the storage device 102, but may be acquired via the network N from a map data server (not shown) that distributes the map data MP, for example. Details of the vehicle-specific acoustic characteristic information DB and the user setting data US will be described later.
  • the storage device 102 is a computer-readable recording medium (for example, a computer-readable non-transitory recording medium).
  • The storage device 102 includes nonvolatile memory and volatile memory.
  • The nonvolatile memory is, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory).
  • the volatile memory is, for example, RAM (Random Access Memory).
  • The storage device 102 may be a portable recording medium that can be attached to and detached from the audio server 10, or a recording medium (for example, cloud storage) to which the control device 103 can write and from which it can read via the network N.
  • the control device 103 is composed of one or more processors that control each element of the audio server 10.
  • The control device 103 includes one or more types of processors such as a CPU (Central Processing Unit), an SPU (Sound Processing Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit).
  • By executing the program PG1, the control device 103 functions as a first acquisition section 111, a second acquisition section 112, a parameter determination section 113, an output sound generation section 114, a first transmission control section 115, and a change reception section 116. Details of these sections will be described later.
  • FIG. 5 is a block diagram showing the configuration of the in-vehicle audio device 20A.
  • the vehicle-mounted audio device 20A will be described as an example, but the vehicle-mounted audio devices 20B to 20N have a similar configuration.
  • the in-vehicle audio device 20 is mounted on the vehicle C.
  • Vehicle-mounted audio device 20 includes a head unit 200, an amplifier 220, and a speaker 230.
  • the head unit 200 is provided in the instrument panel of the vehicle C, for example.
  • the head unit 200 includes a communication device 211 , an operating device 212 , a sound data acquisition device 213 , a microphone 214 , a storage device 215 , and a control device 216 .
  • the communication device 211 includes a communication interface for wide area communication network connection that can be connected to the network N using wireless communication, and communicates with the acoustic server 10 via the network N.
  • the communication device 211 receives output sound data Do from the audio server 10.
  • Communication device 211 is an example of a receiving device.
  • the operating device 212 receives operations performed by the user of the vehicle C.
  • the user of vehicle C is, for example, a passenger of vehicle C.
  • the operating device 212 is a touch panel.
  • the operation device 212 is not limited to a touch panel, but may be an operation panel having various operation buttons.
  • the sound data acquisition device 213 acquires sound data of the sound output by the in-vehicle audio device 20.
  • the sound data acquisition device 213 may be a reading device that reads sound data stored in a recording medium such as a CD (Compact Disc) or an SD card.
  • the sound data acquisition device 213 may be a radio broadcast or television broadcast receiving device.
  • the sound data acquisition device 213 may be a communication device that can be connected to a nearby electronic device (for example, a smartphone, a portable music player, etc.) using, for example, wireless communication or wired communication.
  • the sound data acquisition device 213 includes a communication interface for short-range communication (for example, Bluetooth (registered trademark), USB (Universal Serial Bus), etc.), and communicates with devices located nearby.
  • the sound data acquired by the sound data acquisition device 213 is hereinafter referred to as "acquired sound data Dsy.”
  • the microphone 214 picks up the sound inside the cabin of the vehicle C and generates sound data of the collected sound (hereinafter referred to as "picked-up data").
  • the sound data generated by the microphone 214 is output to the control device 216 of the head unit 200.
  • the microphones 214 are not limited to being provided in the head unit 200, but may be provided in multiple locations in the vehicle interior, or may be provided outside the vehicle. Additionally, the microphone 214 may be externally connected to the head unit 200.
  • the storage device 215 stores a program PG2 executed by the control device 216.
  • the storage device 215 may also store sound data.
  • the sound data stored in the storage device 215 may be, for example, sound data indicating a song or the like, or may be system sound output when the head unit 200 is operated.
  • the sound data stored in the storage device 215 will be referred to as "stored sound data Dsm" hereinafter.
  • the storage device 215 is a computer-readable recording medium (for example, a computer-readable non-transitory recording medium).
  • Storage device 215 includes nonvolatile memory and volatile memory.
  • Non-volatile memories are, for example, ROM, EPROM and EEPROM.
  • Volatile memory is, for example, RAM.
  • The storage device 215 may be a portable recording medium that can be attached to and detached from the in-vehicle audio device 20, or a recording medium (for example, cloud storage) to which the control device 216 can write and from which it can read via the network N.
  • the control device 216 is composed of one or more processors that control each element of the in-vehicle audio device 20.
  • the control device 216 is configured with one or more types of processors such as a CPU, SPU, DSP, FPGA, or ASIC.
  • the control device 216 is connected to a vehicle ECU (Electronic Control Unit) 50, a navigation device 52, and a camera 54.
  • The vehicle ECU 50 controls the operation of the vehicle C. More specifically, based on the operating states of operation mechanisms of the vehicle C such as a steering wheel, a shift lever, an accelerator pedal, and a brake pedal (not shown), the vehicle ECU 50 controls drive mechanisms of the vehicle C such as an engine or a motor, and a braking mechanism such as brakes.
  • Vehicle ECU 50 outputs system sound data Dss of vehicle C to control device 216. For example, when the shift lever is operated in reverse (R), the vehicle ECU 50 outputs an alarm sound indicating that the vehicle C is moving backward as the system sound data Dss. Further, for example, when the traveling speed of the vehicle C exceeds the speed limit, the vehicle ECU 50 outputs an alarm sound indicating that the vehicle C is overspeeding as the system sound data Dss.
  • the navigation device 52 searches for a route to a destination point set by the user and provides route guidance to the destination point. For example, the navigation device 52 displays a map around the current location of the vehicle C on its own display, and displays a mark indicating the current location of the vehicle C superimposed on the map. Furthermore, the navigation device 52 outputs a guidance voice that instructs the user about the direction of travel on the route to reach the destination point. Furthermore, the navigation device 52 may output a guidance voice indicating caution regarding traffic regulations, such as the speed limit of the road on which the vehicle C is traveling. The guidance voice of the navigation device 52 is output from the speaker 230. The navigation device 52 outputs guidance audio data Dsa, which is audio data corresponding to the guidance audio, to the control device 216. Further, the navigation device 52 may output position information of the vehicle C generated by a GPS (Global Positioning System) device, not shown, to the control device 216.
  • the camera 54 captures an image of the interior of the vehicle C and generates image data. Image data generated by camera 54 is output to control device 216.
  • the camera 54 may capture not only images inside the vehicle interior but also images outside the vehicle.
  • the camera 54 may also serve as, for example, a drive recorder mounted on the vehicle C or an imaging device for a safety system of the vehicle C.
  • the control device 216 functions as a vehicle information transmitting section 251, a setting receiving section 252, a second transmission controlling section 253, a receiving controlling section 254, and an output controlling section 255 by executing the program PG2. Details of the vehicle information transmitting section 251, setting receiving section 252, second transmission controlling section 253, receiving controlling section 254, and output controlling section 255 will be described later.
  • the amplifier 220 amplifies the sound data and supplies the amplified sound data to the speaker 230.
  • output sound data Do output from the control device 216 is input to the amplifier 220.
  • the speaker 230 outputs sound based on the output sound data Do.
  • a plurality of speakers 230 constitute a speaker set.
  • the arrangement of the plurality of speakers 230 differs depending on the vehicle C depending on the type of vehicle C or customization by the user.
  • the speaker 230 may be a single speaker.
  • FIG. 6 is a diagram illustrating the arrangement of the speakers 230 in the vehicle C.
  • Vehicle C includes seats P1 to P4.
  • Seat P1 and seat P2 are seats provided at the front of the cabin of vehicle C.
  • Seat P1 is the driver's seat, and seat P2 is the passenger's seat.
  • Seat P3 and seat P4 are seats provided at the rear of the cabin of vehicle C.
  • Seat P3 is located behind seat P1 (the driver's seat), and seat P4 is located behind seat P2 (the passenger's seat).
  • the vehicle C also includes doors D1 to D4.
  • the door D1 is a door through which a passenger seated in the seat P1 gets on and off the vehicle. Note that the passenger is an example of a user.
  • Door D2 is a door through which a passenger seated in seat P2 gets on and off.
  • Door D3 is a door for a passenger seated in seat P3 to get on and off.
  • Door D4 is a door through which a passenger seated in seat P4 gets on and off.
  • Speakers 230A and 230B are provided on door D1. Speakers 230C and 230D are provided on door D2. Speaker 230E is provided on door D3. Speaker 230F is provided on door D4. In other words, speakers 230A and 230B are provided at locations corresponding to seat P1. Speakers 230C and 230D are provided at locations corresponding to seat P2. Speaker 230E is provided at a location corresponding to seat P3. Speaker 230F is provided at a location corresponding to seat P4.
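  • The seat-to-speaker correspondence described above can be represented as a simple lookup table. The following Python sketch is merely illustrative; the identifiers are taken from the description of FIG. 6, while the table and function names are hypothetical.

```python
# Speakers provided at the locations corresponding to each seat
# (doors D1-D4 carry the speakers, per the arrangement of FIG. 6).
SEAT_TO_SPEAKERS = {
    "P1": ["230A", "230B"],   # driver's seat, door D1
    "P2": ["230C", "230D"],   # passenger's seat, door D2
    "P3": ["230E"],           # rear seat behind P1, door D3
    "P4": ["230F"],           # rear seat behind P2, door D4
}

def speakers_for(seat: str) -> list[str]:
    """Return the speakers corresponding to a seat, or an empty list if unknown."""
    return SEAT_TO_SPEAKERS.get(seat, [])
```

Because the arrangement of the speakers 230 differs per vehicle C, such a table would in practice be part of the per-vehicle configuration rather than a fixed constant.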
  • The sound output from the speaker 230 includes, for example, sound based on at least one of the acquired sound data Dsy acquired by the sound data acquisition device 213, the stored sound data Dsm stored in the storage device 215, the system sound data Dss output from the vehicle ECU 50, and the guidance audio data Dsa output from the navigation device 52.
  • The acquired sound data Dsy, stored sound data Dsm, system sound data Dss, and guidance audio data Dsa are either sound data stored in the in-vehicle audio device 20A or sound data output by a device connected to the in-vehicle audio device 20A.
  • the acquired sound data Dsy, stored sound data Dsm, system sound data Dss, and guidance sound data Dsa are hereinafter referred to as "local sound data Dsl.”
  • FIG. 3A is a diagram schematically showing an example of a data flow between the distribution server 30 and the in-vehicle audio device 20A.
  • Distribution server 30 distributes sound data via network N.
  • the sound data distributed by the distribution server 30 is hereinafter referred to as "distributed sound data Dsn.”
  • the distribution server 30 distributes, via the network N, distribution sound data Dsn indicating, for example, the sounds of songs, environmental sounds, talk programs, news programs, or language learning materials.
  • the distribution server 30 is not limited to distributing sound data, but may also distribute video data including sound data.
  • the distribution server 30 is provided, for example, by an operator of a distribution service that distributes audio data (including video data). Although one distribution server 30 is illustrated in FIG. 3A, a plurality of distribution servers 30 may be provided. For example, a plurality of sound data distribution companies may each provide distribution servers 30.
  • When receiving distribution sound data Dsn from the distribution server 30, the user selects desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. More specifically, the in-vehicle audio device 20A (a setting reception unit 252 described later; see FIG. 5) obtains a list of the plurality of distribution sound data Dsn distributed by the distribution server 30 and displays the list on the operating device 212 (touch panel). The user selects the desired distribution sound data Dsn from the list displayed on the operating device 212.
  • Instead of selecting specific distribution sound data Dsn, the user may select attributes of the distribution sound data Dsn (for example, the name of the creator of the sound data such as an artist name, the genre of the sound data, or a situation suited to the sound data).
  • The in-vehicle audio device 20A (setting reception unit 252) transmits information M specifying the distribution sound data Dsn (for example, a song name or an attribute) to the audio server 10 via the communication device 211 (S11).
  • the information M may include information specifying the format of sound data that can be reproduced by the in-vehicle audio device 20A.
  • The audio server 10 transmits the information M to the distribution server 30 (S12). Based on the information M, the distribution server 30 identifies the distribution sound data Dsn requested by the user from among the plurality of distribution sound data Dsn, and transmits the identified distribution sound data Dsn to the audio server 10 (S13). The audio server 10 transmits the distribution sound data Dsn to the in-vehicle audio device 20A (S14). At this time, the audio server 10 performs acoustic processing on the distribution sound data Dsn before transmitting it to the in-vehicle audio device 20A. That is, the audio server 10 acquires the distribution sound data Dsn as input sound data Di and transmits the acoustically processed distribution sound data Dsn to the in-vehicle audio device 20A as output sound data Do.
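  • Steps S11 to S14 amount to a relay through the audio server. The following Python sketch illustrates that shape only; the function names stand in for network calls and the acoustic processing is a placeholder, none of which comes from the disclosure.

```python
def apply_acoustic_processing(di: bytes) -> bytes:
    # Placeholder effect; a real server would apply equalization, reverb, etc.
    return b"processed:" + di

def relay_flow(info_m: str, catalog: dict[str, bytes]) -> bytes:
    """S11-S14: the in-vehicle device sends information M to the audio server,
    which forwards it to the distribution server, receives the matching
    distribution sound data Dsn, applies acoustic processing, and returns
    the result as output sound data Do."""
    # S12: the audio server forwards information M to the distribution server.
    # S13: the distribution server identifies and returns the requested Dsn.
    dsn = catalog[info_m]
    # The audio server treats Dsn as input sound data Di and processes it.
    do = apply_acoustic_processing(dsn)
    # S14: the processed data is transmitted to the in-vehicle audio device.
    return do

# S11: the setting reception unit sends information M (here, a made-up key).
catalog = {"song-42": b"pcm-frames"}
do = relay_flow("song-42", catalog)
```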
  • FIG. 3B is a diagram schematically showing another example of the data flow between the distribution server 30 and the in-vehicle audio device 20A.
  • the distribution server 30 and the in-vehicle audio device 20A are directly connected.
  • the in-vehicle audio device 20A obtains a list of a plurality of distributed sound data Dsn distributed by the distribution server 30, and receives selection of desired distributed sound data Dsn from the user.
  • the in-vehicle audio device 20A transmits information M specifying the distribution sound data Dsn to the distribution server 30 via the communication device 211 (S21).
  • the distribution server 30 specifies the distribution sound data Dsn requested by the user based on the information M, and transmits the specified distribution sound data Dsn to the in-vehicle audio device 20A (S22).
  • the in-vehicle audio device 20A transmits the distributed sound data Dsn to the audio server 10 (S23).
  • This distributed sound data Dsn becomes input sound data Di.
  • the acoustic server 10 performs acoustic processing on the distributed sound data Dsn, and transmits it to the vehicle-mounted audio device 20A as output sound data Do (S24).
  • steps S21 and S22 may be executed by the user's smartphone instead of the in-vehicle audio device 20A.
  • By using a smartphone, even a passenger in the rear seat P3 or seat P4 of the vehicle C can easily select the desired distribution sound data Dsn.
  • A-3 Functional configuration
  • A-3-1 Acoustic server 10
  • By executing the program PG1, the control device 103 of the audio server 10 functions as the first acquisition section 111, the second acquisition section 112, the parameter determination section 113, the output sound generation section 114, the first transmission control section 115, and the change reception section 116. In the following description, it is assumed that the one in-vehicle audio device 20 to which the audio server 10 provides the output sound data Do is the in-vehicle audio device 20A.
  • the first acquisition unit 111 acquires input sound data Di.
  • the first acquisition unit 111 is an example of a data acquisition unit.
  • the input sound data Di is sound data corresponding to the sound output from the in-vehicle audio device 20A.
  • the first acquisition unit 111 acquires the input sound data Di using the following two methods.
  • [1] Acquire input sound data Di from the in-vehicle sound device 20A
  • the first acquisition unit 111 acquires the acquired sound data Dsy or the stored sound data Dsm from the in-vehicle sound device 20A via the network N.
  • the first acquisition unit 111 acquires the system sound data Dss or the guidance voice data Dsa from the in-vehicle sound device 20A via the network N.
  • the first acquisition unit 111 acquires both or either of the sound data stored in the in-vehicle sound device 20A and the sound data output by the device connected to the in-vehicle sound device 20A as the input sound data Di. In other words, the first acquisition unit 111 acquires the local sound data Dsl as the input sound data Di.
  • [2] Acquire input sound data Di from the distribution server 30
  • If the user of the in-vehicle audio device 20A desires to use the distribution sound data Dsn, the first acquisition unit 111 acquires the distribution sound data Dsn as the input sound data Di from the distribution server 30 via the network N. The method for acquiring the distribution sound data Dsn is as described using FIGS. 3A and 3B.
  • the second acquisition unit 112 acquires both or one of first information regarding the attribute of the input sound data Di and second information regarding one of the vehicle-mounted audio devices 20A among the plurality of vehicle-mounted audio devices 20.
  • the second acquisition unit 112 is an example of an information acquisition unit. Details of ⁇ 1> first information and ⁇ 2> second information will be described below.
  • the first information is information regarding attributes of the input sound data Di.
  • the attributes of the input sound data Di include, for example, ⁇ 1-1> information regarding the format of the input sound data Di and/or ⁇ 1-2> information regarding the content of the sound.
  • Information regarding the content of the sound indicates, for example, at least one of the song title, artist name, or music genre of the input sound data Di.
  • Information on the format of the input sound data Di is information that specifies the format of the input sound data Di.
  • MP3: MPEG-1 Audio Layer-3 (lossy compression)
  • AAC: Advanced Audio Coding (lossy compression)
  • FLAC: Free Lossless Audio Codec (lossless compression)
  • WAV-PCM: uncompressed PCM data in the WAV (waveform) file format
  • The above are the main sound data formats. For example, the format of the stored sound data Dsm and that of the guidance voice data Dsa may be different.
  • the format of the distribution sound data Dsn distributed by the distribution server 30 may differ for each distribution service of sound data.
  • the second acquisition unit 112 acquires information on the format of the input sound data Di as the first information. Specifically, the second acquisition unit 112 determines the format of the input sound data Di based on the extension of the input sound data Di acquired by the first acquisition unit 111, for example.
  • Information regarding the sound content of the input sound data Di such as the song title, artist name, music genre, etc.
  • If the input sound data Di is music data, for example, information such as the song title, artist name, and music genre is added to the input sound data Di as metadata.
  • the second acquisition unit 112 acquires information regarding the sound content of the input sound data Di, such as the song title, artist name, or music genre, based on, for example, the metadata of the input sound data Di acquired by the first acquisition unit 111.
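  • As an illustration only, the acquisition of the first information described above (the format inferred from the file extension, the content information read from attached metadata) might be sketched as follows. This is a hypothetical Python sketch; the field names and the set of recognized formats are assumptions, not part of the disclosure.

```python
import os

def get_first_information(file_name, metadata=None):
    """Derive the 'first information' for input sound data Di.

    Hypothetical sketch: the format is inferred from the file
    extension, and the sound-content information (song title, artist
    name, music genre) is read from attached metadata.
    """
    ext = os.path.splitext(file_name)[1].lower().lstrip(".")
    known_formats = {"mp3", "aac", "flac", "wav"}
    metadata = metadata or {}
    return {
        "format": ext if ext in known_formats else "unknown",
        "title": metadata.get("title"),
        "artist": metadata.get("artist"),
        "genre": metadata.get("genre"),
    }

info = get_first_information("song.flac", {"title": "Example", "genre": "rock"})
```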
  • the second information is information about the in-vehicle acoustic device 20A.
  • the information about the in-vehicle acoustic device 20A includes, for example, at least one of ⁇ 2-1> information indicating the acoustic characteristics of the in-vehicle acoustic device 20A (hereinafter referred to as "acoustic characteristic information") and ⁇ 2-2> information about the environment in which the in-vehicle acoustic device 20A is placed (hereinafter referred to as "environmental information").
  • the second acquisition unit 112 acquires acoustic characteristic information of the vehicle-mounted audio device 20A as second information.
  • the acoustic characteristic information of the vehicle-mounted audio device 20A is information indicating what kind of sound is heard by the user when the vehicle-mounted audio device 20A outputs sound based on predetermined sound data.
  • the acoustic characteristic information of the vehicle-mounted audio device 20A includes information regarding the performance of the vehicle-mounted audio device 20A and information regarding the space (vehicle cabin) through which the sound output from the vehicle-mounted audio device 20A (speaker 230) travels before being heard by the user.
  • the second acquisition unit 112 measures, for example, the acoustic characteristics of the in-vehicle audio device 20A. More specifically, the second acquisition unit 112 transmits test sound data to the in-vehicle audio device 20A.
  • the in-vehicle audio device 20A outputs a sound corresponding to the test sound data (hereinafter referred to as "test sound") from the speaker 230.
  • test sounds are collected using external microphones placed at the positions of seats P1 to P4, and the collected sound data is output to the in-vehicle audio device 20A.
  • the test sound may be collected by the microphone 214 of the head unit 200 instead of using an external microphone.
  • the in-vehicle audio device 20A transmits collected sound data to the audio server 10.
  • the second acquisition unit 112 estimates the acoustic characteristics of the in-vehicle audio device 20A by acquiring the collected sound data and analyzing the collected sound data. Note that the in-vehicle audio device 20A may estimate the acoustic characteristics based on the collected sound data.
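  • One conceivable form of the analysis step (an assumption; the disclosure does not specify the analysis method) is to take, per frequency band, the level difference between the emitted test sound and the sound collected at the seat:

```python
def estimate_frequency_response_db(test_levels_db, recorded_levels_db):
    """Per-band level difference between the emitted test sound and the
    collected sound; positive values indicate a boost, negative a loss.
    A rough, assumed stand-in for the acoustic-characteristic analysis."""
    return [round(r - t, 6) for t, r in zip(test_levels_db, recorded_levels_db)]

# three assumed bands (low / mid / high), all emitted at 80 dB
response = estimate_frequency_response_db([80.0, 80.0, 80.0], [78.0, 81.0, 76.5])
```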
  • the measurement of the acoustic characteristics may be performed in advance, for example, prior to using the in-vehicle audio device 20A. In this case, the output of the test sound and the transmission of the collected sound data are also performed in advance.
  • the second acquisition unit 112 analyzes the collected sound data and estimates the acoustic characteristics of the in-vehicle audio device 20A.
  • the second acquisition unit 112 records the estimated acoustic characteristics as acoustic characteristic information in the vehicle-specific acoustic characteristic information DB (see FIG. 4).
  • the second acquisition unit 112 stores the acoustic characteristic information in the vehicle-specific acoustic characteristic information DB in association with the identification information for identifying the in-vehicle audio device 20A. That is, the vehicle-specific acoustic characteristic information DB includes information in which the identification information of the vehicle-mounted audio device 20A is associated with the acoustic characteristic information of the vehicle-mounted audio device 20A. The vehicle-specific acoustic characteristic information DB also stores acoustic characteristic information of other vehicle-mounted acoustic devices 20, such as the vehicle-mounted acoustic device 20B.
  • by searching the vehicle-specific acoustic characteristic information DB using the identification information of the in-vehicle acoustic device 20A as a key, the second acquisition unit 112 can acquire the acoustic characteristic information of the in-vehicle acoustic device 20A.
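  • The vehicle-specific acoustic characteristic information DB can be pictured as a store keyed by the identification information of each in-vehicle acoustic device 20. The sketch below is hypothetical; the actual schema of the DB is not specified in the text.

```python
class AcousticCharacteristicDB:
    """Minimal in-memory sketch of the vehicle-specific acoustic
    characteristic information DB: device identification information
    is associated with acoustic characteristic information."""

    def __init__(self):
        self._records = {}

    def store(self, device_id, characteristic_info):
        self._records[device_id] = characteristic_info

    def lookup(self, device_id):
        # returns None when no measurement has been recorded yet
        return self._records.get(device_id)

db = AcousticCharacteristicDB()
db.store("device-20A", {"freq_response_db": [-2.0, 1.0, -3.5]})
```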
  • the measurement of the acoustic characteristics may be performed, for example, every time the in-vehicle audio device 20A is used. In this case, for example, the test sound is output and collected every time before the vehicle C starts traveling.
  • By measuring the acoustic characteristics each time the vehicle-mounted audio device 20A is used, the acoustic characteristics can be estimated while reflecting the in-vehicle environment at each use. For example, the number of occupants and the riding positions in the vehicle C may vary from time to time.
  • the influence of sound absorption and reflection by the occupant's body can be reflected in the acoustic characteristics.
  • the second acquisition unit 112 may acquire at least one of information regarding the performance of the in-vehicle acoustic device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C as acoustic characteristic information of the in-vehicle acoustic device 20A, rather than measuring the acoustic characteristics using a test sound.
  • the parameter determination unit 113 which will be described later, can estimate the acoustic characteristics of the in-vehicle acoustic device 20 by performing a simulation using the information regarding the performance of the in-vehicle acoustic device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C.
  • the information regarding the performance of the in-vehicle audio device 20A includes, for example, the product numbers (model numbers) of the head unit 200, the speaker 230, and the amplifier 220. Further, the information regarding the specifications of the vehicle C is, for example, the vehicle type (including model number, grade, etc.) or cabin layout of the vehicle C in which the in-vehicle audio device 20A is mounted. Generally, by specifying the vehicle type (including model number, grade, etc.) of the vehicle C, it is possible to specify the interior layout and the material of the seat P arranged in the vehicle interior. On the other hand, if the user has retrofitted the speaker 230, for example, it is preferable to obtain information that can specify the actual cabin layout.
  • the cabin layout is information such as the dimensions of the cabin, the positions of the seats P1 to P4, and the positions of the speakers 230, for example. Further, the information regarding the occupants of the vehicle C includes information such as the number of occupants, the riding position (seated seat P), and the physique of the occupants.
  • the second acquisition unit 112 acquires, as the second information, environmental information related to the environment in which the vehicle-mounted sound device 20A is placed.
  • the environment in which the vehicle-mounted sound device 20A is placed is, for example, the vehicle C.
  • the environmental information includes, for example, information indicating the operation state of the vehicle C and vehicle information such as detection information of the sensor 74 mounted on the vehicle C.
  • At least one of the above-mentioned vehicle interior layout (information such as the dimensions of the vehicle interior, the positions of the seats P1 to P4, the position of the speaker 56, etc.) or information related to the occupants of the vehicle C (information such as the number of occupants, their riding positions (seats P in which they are seated), and the physiques of the occupants) may be acquired as the environmental information.
  • the environmental information includes, for example, information regarding sounds generated around the in-vehicle audio device 20A (hereinafter referred to as "ambient sounds").
  • the ambient sound is the sound generated around the vehicle-mounted audio device 20A, for example, the sound generated inside or outside the vehicle C.
  • the sounds generated inside the vehicle C include, for example, the sounds of conversations between passengers, the sounds of phone calls made by passengers using smartphones or the like, and the operating sounds of electronic devices, such as smartphones, used by passengers.
  • Sounds generated outside the vehicle C include, for example, the running noise generated by the traveling of the vehicle C, the running noise of other vehicles around the vehicle C, and the environmental sounds around the vehicle C (rain sound, wind sound, guidance sound of pedestrian signals, etc.).
  • the second acquisition unit 112 acquires, as the environmental information, for example, sound data collected by the microphone 214 of the in-vehicle audio device 20A (hereinafter referred to as "collected sound data"). Further, the second acquisition unit 112 may acquire, as the environmental information, at least one of the traveling position and traveling speed of the vehicle C and an image of the outside of the vehicle captured by the camera 54. This information is used to estimate the ambient sound.
  • the second acquisition unit 112 may acquire the environmental information from the in-vehicle audio device 20A, or may acquire the environmental information from a vehicle management server (not shown) that manages the vehicle C via the network N.
  • the vehicle management server acquires, via the network N, information such as information indicating the driving state of the vehicle C, information indicating the operation state of the vehicle C, and detection information of the sensors mounted on the vehicle C from a plurality of vehicles C traveling on the road.
  • the vehicle management server generates control data for controlling automatic driving in vehicle C, for example, using this information. Further, the vehicle management server may use this information to estimate, for example, the road congestion situation and distribute the congestion situation via the network N.
  • the ambient sound of vehicle C changes every moment. Therefore, the second acquisition unit 112 continuously acquires environmental information from the in-vehicle audio device 20A while transmitting the output sound data Do.
  • a parameter determining unit 113 which will be described later, re-determines the parameters of the sound processing when the environmental information changes.
  • the parameter determining unit 113 determines parameters to be used in acoustic processing for imparting acoustic effects to the input sound data Di, based on at least one of the first information and the second information. Generally, when the acoustic processing to be performed on the input sound data Di is determined, one or more "types of parameters" to be determined in order to perform the acoustic processing are specified.
  • the "parameter” determined by the parameter determining unit 113 means a specific "parameter value” corresponding to one or more "parameter types.” For example, when the "parameter type" is the volume, the "parameter value” is a volume value that specifies the volume of the volume (hereinafter sometimes simply referred to as "volume").
  • the acoustic processing that the acoustic server 10 performs on the input sound data Di includes at least one of [A] acoustic adjustment processing, [B] environment adaptation processing, [C] volume adjustment processing, and [D] format conversion processing.
  • the sound adjustment process is a process for improving the sound quality of the sound output from the in-vehicle audio device 20.
  • the sound adjustment process is, for example, various processes that are originally executed by a DSP for in-vehicle audio.
  • the space inside the vehicle C is limited, and the distances between the user and each speaker 230 differ.
  • in addition, sound is easily reflected by the window glass and absorbed by the seats P, so the sound quality is likely to deteriorate.
  • the sound adjustment process is a process of adjusting the sound output from the in-vehicle audio device 20 so as to optimize the listening by the occupant seated in the seat P.
  • Time alignment is a process of changing the timing at which sound is output from each speaker 230 to focus the sound on the occupant of vehicle C (mainly the user sitting in the driver's seat).
  • the type of parameter is, for example, the output timing of sound in each speaker 230 (for example, the amount of delay of other speakers 230 with respect to the reference speaker 230).
  • An equalizer is a process that adjusts the sound balance by increasing or decreasing the gain (amplification of the input signal) for each frequency band.
  • the type of parameter is, for example, gain in each frequency band.
  • Crossover is a process of adjusting the output frequency band allocated to each speaker 230.
  • the type of parameter is, for example, a frequency band to be allocated to each speaker 230.
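  • Of the three processes above, time alignment is the easiest to illustrate. The sketch below shows one common way to compute the per-speaker delay amounts (an assumption; the disclosure does not specify the calculation): the most distant speaker serves as the reference, and every other speaker is delayed so that all wavefronts arrive at the listening position simultaneously.

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound at 20 degrees C

def time_alignment_delays_s(distances_m):
    """Delay (in seconds) to apply to each speaker so that sound from
    every speaker reaches the listening position at the same time."""
    travel_times = [d / SPEED_OF_SOUND_M_S for d in distances_m]
    longest = max(travel_times)
    return [longest - t for t in travel_times]

# assumed speaker-to-listener distances for four speakers
delays = time_alignment_delays_s([1.0, 1.2, 1.5, 1.7])
```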
  • the parameter determination unit 113 determines parameters using the acoustic characteristic information of the in-vehicle audio device 20A acquired by the second acquisition unit 112.
  • the parameter determination unit 113 can directly determine the parameters from the acoustic characteristic information.
  • when the acoustic characteristic information is at least one of information regarding the performance of the in-vehicle audio device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C, the parameter determining unit 113 performs a simulation based on this information to estimate the acoustic characteristics of the in-vehicle audio device 20 and determine the parameters.
  • parameters may be determined based on information such as the song title, artist name, music genre, etc. of the input sound data Di, which is the first information.
  • the parameter determining unit 113 determines equalizer processing parameters based on, for example, the music genre of the input sound data Di. For example, if the music genre is rock, the parameter determination unit 113 relatively increases the volume of the high range corresponding to the electric guitar sound and the volume of the low range corresponding to the kick and bass sounds. Further, when the music genre is pop music, the parameter determining unit 113 relatively increases the volume of the midrange corresponding to the vocal sound. That is, when performing equalizer processing, the type of parameter is, for example, relative volume for each frequency band.
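  • The genre-dependent equalizer behaviour described above can be expressed as a preset table. The band split and the gain values below are assumptions chosen only to mirror the description (boosted highs and lows for rock, boosted mids for pop); they are not values from the disclosure.

```python
# hypothetical genre-to-equalizer presets (gains in dB per band)
GENRE_EQ_GAINS_DB = {
    "rock": {"low": 4.0, "mid": 0.0, "high": 3.0},  # kick/bass and guitar emphasized
    "pop": {"low": 0.0, "mid": 3.0, "high": 0.0},   # vocals emphasized
}

FLAT_RESPONSE = {"low": 0.0, "mid": 0.0, "high": 0.0}

def equalizer_parameters(genre):
    """Relative volume per frequency band for the given music genre,
    falling back to a flat response for unknown genres."""
    return GENRE_EQ_GAINS_DB.get(genre, FLAT_RESPONSE)
```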
  • the environment adaptation process is a process of adjusting the volume of the sound data based on the ambient sound of the in-vehicle audio device 20A. For example, if construction is being carried out around the vehicle C and noise is generated, or if the vehicle C is running at high speed and the running noise is loud, the parameter determination unit 113 increases the volume of the sound output from the speaker 230. Further, for example, when the passengers are talking with each other inside the vehicle, the parameter determining unit 113 may reduce the volume of the sound output from the speaker 230. The parameter determination unit 113 may also change the frequency of the sound output from the speaker 230 in accordance with the pitch (frequency) of the ambient sound. That is, when executing the environment adaptation process, the type of parameter is, for example, the volume or the frequency band of the sound output from the speaker 230.
  • the parameter determining unit 113 determines parameters using the environmental information acquired by the second acquiring unit 112. If the environmental information is sound data collected by the microphone 214, the parameter determining unit 113 analyzes the type and volume of the ambient sound from the collected sound data, and determines the parameters based on the analysis results. Further, when the environmental information is at least one of the traveling position and speed of the vehicle C and the image outside the vehicle captured by the camera 54, the parameter determining unit 113 estimates the type and volume of the ambient sound based on these pieces of information, and determines the parameters based on the estimation results.
  • the type and volume of ambient sound are estimated, for example, as follows.
  • the parameter determining unit 113 acquires from the map data MP at least one piece of information among the road surface condition, the predicted traffic volume, and the surrounding environment (busy town, residential area, mountainous area, etc.) at the traveling position of the vehicle C. Further, the parameter determining unit 113 may obtain real-time traffic volume or weather information around the traveling position of the vehicle C via the network N. Further, when an image of the outside of the vehicle captured by the camera 54 is acquired, the parameter determination unit 113 detects at least one of the traffic volume, weather conditions, road surface conditions, and surrounding environment around the traveling position of the vehicle C by image analysis.
  • the parameter determining unit 113 estimates the type and volume of ambient sound of the vehicle-mounted audio device 20A.
  • the parameter determining unit 113 estimates the volume of the traveling sound of the vehicle C based on the traveling speed. Generally, the faster the vehicle travels, the louder the sound of the vehicle travels.
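  • A minimal, assumed model of this relation (the disclosure only states that faster travel means louder travel noise; the constants and the logarithmic form below are invented for illustration):

```python
import math

def estimate_travel_noise_db(speed_km_h):
    """Rough estimate of the vehicle's own travel-noise level:
    an assumed noise floor plus a logarithmic rise with speed."""
    base_db = 40.0  # assumed stationary noise floor
    if speed_km_h <= 0:
        return base_db
    return base_db + 10.0 * math.log10(1.0 + speed_km_h / 10.0)
```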
  • the volume adjustment process is a process for adjusting the volume of each sound when two or more sounds are output simultaneously.
  • the volume adjustment process is executed, for example, when outputting a system sound such as a guidance sound from the navigation device 52 or a warning sound from the safety system of the vehicle C while outputting a sound such as a song.
  • when executing the volume adjustment process, the parameter determining unit 113 determines, as parameters, the volume of the distribution sound data Dsn and the volume of the guidance audio data Dsa. More specifically, the parameter determining unit 113 determines each volume so that the volume of the music based on the distribution sound data Dsn is smaller than the volume of the guidance voice.
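  • The relation "music quieter than the guidance voice" amounts to what audio engineers call ducking. A sketch, with an assumed duck ratio (the disclosure does not give concrete volume values):

```python
def duck_music_volume(music_volume, guidance_volume, guidance_active,
                      duck_ratio=0.3):
    """While a guidance voice (or warning sound) is playing, reduce the
    music volume so that it stays below the guidance volume."""
    if guidance_active:
        return min(music_volume, guidance_volume) * duck_ratio
    return music_volume
```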
  • the presence or absence of sound output from the speakers 230A to 230F may be changed based on whether or not an occupant is in the seats P1 to P4.
  • the second acquisition unit 112 acquires information indicating the presence or absence of an occupant in each seat P1 to P4 of the vehicle C as second information.
  • the information indicating the presence or absence of an occupant may be, for example, an image captured by the camera 54 inside the vehicle, or may be a detection result of a seating sensor (not shown) provided in each of the seats P1 to P4.
  • the parameter determining unit 113 sets the volume output from the speaker 230 corresponding to the seat P where no passenger is sitting to zero or lower than normal.
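  • The per-seat behaviour can be sketched as a mapping from seat occupancy to speaker volume (the seat names and volume values are illustrative assumptions):

```python
def speaker_volumes(occupancy, normal=1.0, reduced=0.0):
    """Volume per speaker: normal for speakers whose seat P is occupied,
    zero (or an assumed lower-than-normal value) otherwise."""
    return {seat: (normal if occupied else reduced)
            for seat, occupied in occupancy.items()}

volumes = speaker_volumes({"P1": True, "P2": True, "P3": False, "P4": False})
```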
  • the format conversion process is a process of converting the input sound data Di into a format that can be played by the in-vehicle audio device 20A.
  • a dedicated application is used for the distribution service provided by the distribution server 30.
  • to use multiple distribution services, the user needs to install a dedicated application for each service and perform tasks such as updating the dedicated applications as necessary.
  • in contrast, with the acoustic server 10 performing the format conversion, the distribution sound data Dsn can be used without installing a dedicated application for each distribution service on the in-vehicle audio device 20A.
  • formats that can be played by the in-vehicle audio device 20A include, for example, the above-mentioned MP3, AAC, FLAC, and WAV-PCM. The format after the format conversion processing is more preferably FLAC (lossless compression) or WAV-PCM (uncompressed), which impose a light decompression (decoding) processing load and do not degrade the sound quality.
  • the parameter determination unit 113 determines whether the format conversion process is necessary based on the information regarding the format of the input sound data Di, which is the first information, and the information, included in the information M, that specifies the formats of sound data that can be played by the in-vehicle audio device 20A. Specifically, when the input sound data Di is in a format that can be played by the in-vehicle audio device 20, the parameter determining unit 113 determines that the format conversion processing is unnecessary. When the input sound data Di is not in a format that can be played by the in-vehicle audio device 20, the parameter determining unit 113 determines to convert the input sound data Di into a format that can be played by the in-vehicle audio device 20. That is, the parameter determining unit 113 determines, as parameters, whether the format conversion processing is necessary for the input sound data Di and, if necessary, the destination format.
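  • The decision logic of the format conversion process might be sketched as follows; the preference for uncompressed or lossless destination formats follows the text, while the fallback behaviour is an assumption.

```python
def format_conversion_parameters(input_format, playable_formats,
                                 preferred=("wav", "flac")):
    """Decide whether format conversion is needed for the input sound
    data Di and, if so, pick a destination format playable by the
    in-vehicle audio device."""
    if input_format in playable_formats:
        return {"convert": False, "target": None}
    for fmt in preferred:  # prefer uncompressed / lossless targets
        if fmt in playable_formats:
            return {"convert": True, "target": fmt}
    return {"convert": True, "target": playable_formats[0]}
```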
  • the output sound generation unit 114 processes the input sound data Di using the parameters determined by the parameter determination unit 113 to generate output sound data Do used in the in-vehicle audio device 20A.
  • the output sound generation unit 114 generates the output sound data Do by performing at least one of the acoustic adjustment processing, environment adaptation processing, volume adjustment processing, or format conversion processing on the input sound data Di acquired by the first acquisition unit 111.
  • the first transmission control unit 115 transmits the output sound data Do to the in-vehicle audio device 20A via the network N.
  • the first transmission control section 115 is an example of a data transmission control section.
  • the output sound data Do transmitted by the first transmission control section 115 is received by the reception control section 254 of the in-vehicle audio device 20A.
  • the change reception unit 116 receives parameter changes from the user using the in-vehicle audio device 20A while the output sound data Do is being transmitted.
  • the change reception unit 116 is an example of a reception unit.
  • the parameter change may be, for example, a change in the volume, or a change in the relationship between a frequency band and its gain in the equalizer (such as boosting the bass range).
  • when the change accepting unit 116 accepts a parameter change, the parameter determining unit 113 changes the parameters used for the acoustic processing to the parameters set by the user. As a result, the acoustic processing is performed in accordance with the change made by the user.
  • when the change receiving unit 116 receives a parameter change, the content of the change is also reflected in the subsequent acoustic processing.
  • suppose, for example, that the user changes the parameters while the first output sound data Do, which is an example of the output sound data Do, is being transmitted.
  • the first output sound data Do is data generated by performing acoustic processing on the first input sound data Di, which is an example of the input sound data Di.
  • the change reception unit 116 associates the identification information of the in-vehicle audio device 20A, the identification information of the first input sound data Di, and the parameters before and after the change with one another, and stores them in the storage device 102 as user setting data US.
  • when transmitting output sound data Do based on the first input sound data Di to the in-vehicle audio device 20A from the next time onward, the parameter determination unit 113 reads the parameters changed by the user from the user setting data US. The output sound generation unit 114 then performs the acoustic processing using the parameters read from the user setting data US to generate the output sound data Do.
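  • The user setting data US can be pictured as a store keyed by the pair of device identification and input-sound identification. The sketch below is a hypothetical in-memory stand-in for the storage device 102.

```python
class UserSettingStore:
    """Associates (device id, input sound id) with the user-changed
    parameters so they can be read back on later transmissions."""

    def __init__(self):
        self._settings = {}

    def save(self, device_id, sound_id, params):
        self._settings[(device_id, sound_id)] = params

    def load(self, device_id, sound_id, default=None):
        return self._settings.get((device_id, sound_id), default)

store = UserSettingStore()
store.save("device-20A", "Di-1", {"bass_gain_db": 4.0})
```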
  • parameter changes made by the user are not limited to being reflected in the first input sound data Di itself, but may also be reflected in the same type of input sound data Di as the first input sound data Di.
  • for example, suppose the first input sound data Di is music data whose music genre is rock.
  • in that case, the acoustic processing of other music data of the rock genre may also be performed using the parameters changed by the user.
  • the parameter values changed by the user may be used for subsequent acoustic processing of the guidance voice data Dsa.
  • the audio server 10 may aggregate the parameter changes received from the users of each of the plurality of vehicle-mounted audio devices 20A to 20N and reflect the results in the parameter determination by the parameter determination unit 113. For example, if many users make similar parameter changes to the output sound data Do generated based on the second input sound data Di, which is an example of the input sound data Di, there is a possibility that the parameters determined by the parameter determination unit 113 do not match the preferences of many users. In this case, the parameter determination unit 113 sets the parameters to be used for the acoustic processing of the second input sound data Di to the parameters as changed by the many users. This makes it possible to realize acoustic processing that reflects the tastes or trends of many users.
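  • One simple way to realize the described aggregation (an assumption; the disclosure does not specify the aggregation rule) is majority voting over the changed values, adopting a value only when more than one user chose it:

```python
from collections import Counter

def aggregate_parameter_changes(changed_values, min_users=2):
    """Return the most common user-changed value for a parameter, or
    None when too few users made the change to treat it as a trend."""
    tally = Counter(changed_values)
    if not tally:
        return None
    value, count = tally.most_common(1)[0]
    return value if count >= min_users else None

# e.g. bass-gain changes (in dB) received from three users
trend = aggregate_parameter_changes([3.0, 3.0, 0.0])
```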
  • A-3-2 In-vehicle audio device 20
  • the control device 216 of the in-vehicle audio device 20 functions as a vehicle information transmitting section 251, a setting accepting section 252, a second transmission controlling section 253, a receiving controlling section 254, and an output controlling section 255.
  • the vehicle information transmitter 251 transmits at least either the acoustic characteristic information of the vehicle-mounted audio device 20A or the environmental information of the vehicle C to the acoustic server 10. At least either the acoustic characteristic information of the in-vehicle audio device 20A or the environmental information of the vehicle C transmitted from the vehicle information transmitting section 251 is acquired by the second acquiring section 112 of the acoustic server 10.
  • the setting receiving unit 252 accepts the selection of desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. Further, the setting reception unit 252 transmits the information M specifying the selected distribution sound data Dsn to the audio server 10.
  • the setting receiving unit 252 receives changes in sound processing parameters from the user using the in-vehicle audio device 20A while outputting sound based on the output sound data Do.
  • the parameter change may be, for example, a change in the volume, a change in the relationship between the frequency band and the gain in the equalizer (such as boosting the bass range), or a change in other parameters.
  • the setting reception unit 252 transmits the contents of the parameter change received from the user to the audio server 10.
  • the second transmission control unit 253 transmits the local sound data Dsl to the audio server 10.
  • the local sound data Dsl transmitted by the second transmission control unit 253 is, for example, the acquired sound data Dsy or stored sound data Dsm whose output is instructed by the user, the system sound data Dss output from the vehicle ECU 50, or the guidance audio data Dsa output from the navigation device 52.
  • the reception control unit 254 receives the output sound data Do from the audio server 10 via the network N.
  • the output sound data Do is data obtained by performing acoustic processing on the local sound data Dsl, or data obtained by performing acoustic processing on the distributed sound data Dsn.
  • the output control unit 255 outputs the output sound data Do received by the reception control unit 254 to the amplifier 220.
  • Amplifier 220 amplifies output sound data Do and outputs it to speaker 230.
  • the speaker 230 outputs sound based on the output sound data Do.
  • FIG. 7 is a flowchart showing the operation of the control device 103 of the sound server 10.
  • various data may be transmitted and received in file units or in packet units.
  • the control device 103 functions as a first acquisition unit 111 and acquires input sound data Di from the in-vehicle sound device 20A or the distribution server 30 (step S20).
  • the control device 103 functions as a second acquisition unit 112 and acquires at least one of first information on the attributes of the input sound data Di and second information on the in-vehicle sound device 20A (step S21).
  • the control device 103 functions as the parameter determining unit 113, and determines parameters to be used for acoustic processing on the input sound data Di, based on at least one of the first information and the second information (step S22).
  • the control device 103 functions as an output sound generation unit 114, and generates output sound data Do used in the in-vehicle audio device 20A by performing acoustic processing on the input sound data Di using the parameters determined in step S22 (step S23).
  • the control device 103 functions as the first transmission control section 115 and transmits the output sound data Do to the in-vehicle audio device 20A (step S24).
  • the control device 103 functions as the second acquisition unit 112 and acquires environmental information, which is an example of second information, from the in-vehicle audio device 20A (step S25).
  • Control device 103 determines whether the ambient sound of vehicle-mounted audio device 20A has changed based on the environmental information (step S26). If the ambient sound has not changed (step S26: NO), the control device 103 advances the process to step S28. On the other hand, if the ambient sound has changed (step S26: YES), the control device 103 functions as the parameter determining unit 113 and changes the sound processing parameters to match the changed ambient sound (step S27). Note that if the environmental information is not acquired in step S21, or if the environmental information is not used to determine the parameters in step S22, the control device 103 may skip the processing of steps S25 to S27.
  • the control device 103 functions as the change reception unit 116, and receives changes to the parameters of the sound processing from the user (step S28). If there is no parameter change from the user (step S28: NO), the control device 103 advances the process to step S30. On the other hand, if a parameter change is accepted from the user (step S28: YES), the control device 103 functions as the parameter determining unit 113, and changes the parameter according to the user's change (step S29). Until the transmission of the output sound data Do based on the input sound data Di acquired in step S20 is completed (step S30: NO), the control device 103 returns the process to step S23. When the transmission of the output sound data Do is completed (step S30: YES), the control device 103 returns the process to step S20.
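The loop of Fig. 7 (steps S20 through S30) can be sketched as follows. This is a hypothetical stand-in, not the patented implementation: packets, the `process` callback, and the dictionaries modeling environment-driven and user-driven parameter changes are all invented for illustration.

```python
def serve_stream(input_packets, initial_params, process,
                 env_changes=None, user_changes=None):
    """Process packets of input sound data Di into output sound data Do,
    re-determining parameters when the environment changes (S25-S27)
    or when the user requests a change (S28-S29)."""
    env_changes = env_changes or {}    # packet index -> new params (S27)
    user_changes = user_changes or {}  # packet index -> new params (S29)
    params = initial_params            # S21-S22: determined up front
    out = []
    for i, di in enumerate(input_packets):
        out.append(process(di, params))    # S23: acoustic processing
        # S24: transmission of Do to the in-vehicle device would happen here
        if i in env_changes:               # S26: ambient sound changed?
            params = env_changes[i]        # S27: match the new ambient sound
        if i in user_changes:              # S28: user changed parameters?
            params = user_changes[i]       # S29: takes effect from next packet
    return out                             # S30: stream finished

# Example: simple gain processing whose parameter the user doubles mid-stream.
do = serve_stream([1.0, 1.0, 1.0], {"gain": 2.0},
                  lambda d, p: d * p["gain"],
                  user_changes={0: {"gain": 4.0}})
# do == [2.0, 4.0, 4.0]
```

The point of the sketch is that parameter re-determination happens inside the transmission loop, so changes in ambient sound or user preference affect subsequent packets without restarting the stream.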
  • the acoustic server 10 generates the output sound data Do by performing acoustic processing on the input sound data Di, and transmits the output sound data Do to the in-vehicle audio device 20A. It is therefore unnecessary to provide a control device for performing sound processing in the in-vehicle audio device 20A, so the configuration of the in-vehicle audio device 20A is simplified and, as a result, the cost of the in-vehicle audio device 20A is reduced.
  • the acoustic server 10 determines parameters used for acoustic processing on the input sound data Di based on at least one of first information regarding the attributes of the input sound data Di and second information regarding the in-vehicle audio device 20A. Therefore, the sound processing parameters are appropriately set, so that the sound quality of the sound based on the output sound data Do is improved.
  • the acoustic server 10 acquires information indicating the acoustic characteristics of the in-vehicle audio device 20A as second information. Since the acoustic characteristics of the in-vehicle audio device 20A are thereby reflected in the acoustic processing parameters, the audio server 10 can perform acoustic processing suitable for using the output sound data Do in the in-vehicle audio device 20A.
  • the acoustic server 10 acquires environmental information of the in-vehicle audio device 20A as second information. Since the ambient sound of the in-vehicle audio device 20A is thereby reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suitable for using the output sound data Do in an environment where ambient sounds are generated.
  • the acoustic server 10 continuously acquires environmental information while transmitting the output sound data Do, and re-determines the parameters when the environmental information changes. Since changes in the ambient sound are thereby reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suitable for using the output sound data Do in an environment where the ambient sound changes from moment to moment.
  • the sound server 10 also determines whether the format of the input sound data Di needs to be converted and, if so, to what format. This allows the user to use the input sound data Di on the in-vehicle sound device 20A without being aware of the data format.
  • the audio server 10 accepts changes to audio processing parameters from the user. Therefore, the audio server 10 can perform audio processing that reflects the user's preferences or situations that are not reflected in the first information or the second information.
  • the audio server 10 obtains input sound data Di from the distribution server 30 that distributes sound data via the network N. Therefore, the user can use various sound data other than the local sound data Dsl of the in-vehicle audio device 20A on the in-vehicle audio device 20A, and the user's convenience can be improved.
  • the audio server 10 acquires input sound data Di from the in-vehicle audio device 20A. Since the audio server 10 can thus perform acoustic processing on the sound data acquired from the in-vehicle audio device 20A, the processing load on the in-vehicle audio device 20A is reduced compared to the case where the in-vehicle audio device 20A performs the acoustic processing itself.
  • the audio server 10 acquires, as input sound data Di, at least one of the sound data stored in the in-vehicle audio device 20A and the sound data output from the equipment connected to the in-vehicle audio device 20A.
  • the audio server 10 performs audio processing on at least one of the sound data stored in the vehicle-mounted audio device 20A and the sound data output from the equipment connected to the vehicle-mounted audio device 20A. Therefore, the audio server 10 can improve user convenience.
  • the audio server 10 generates output sound data Do used by the in-vehicle audio device 20A that outputs sound into the cabin of the vehicle C. Therefore, the acoustic server 10 can improve the sound quality of the sound output into the cabin of the vehicle C, which has a poor sound listening environment unlike inside a building.
  • FIG. 8 is a block diagram showing the configuration of the acoustic server 10 in the second embodiment.
  • the control device 103 of the acoustic server 10 functions as a vehicle sound generation section 117 in addition to the functional configuration in the first embodiment.
  • the vehicle sound generation unit 117 generates vehicle sound data indicating the sound output from the in-vehicle audio device 20A based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. Further, the first transmission control unit 115 transmits the vehicle sound data to the in-vehicle audio device 20A via the network N.
  • the vehicle sound data includes, for example, [1] engine sound data Dse indicating a virtual engine sound, or [2] alarm sound data Dsk notifying surrounding obstacles or the like.
  • Engine sound data Dse: For example, when the vehicle C is an electric vehicle powered by an electric motor, a virtual engine sound may be output in order to give the user riding in the vehicle C a sense of driving.
  • the vehicle sound generation unit 117 generates engine sound data Dse corresponding to the engine sound output from the vehicle-mounted audio device 20A.
  • the second acquisition unit 112 acquires the traveling speed information and accelerator opening information of the vehicle C from the in-vehicle audio device 20A as the second information.
  • the traveling speed information is an example of information indicating the traveling state of the vehicle C.
  • the accelerator opening degree information is an example of information indicating the operating state of the vehicle C.
  • engine sound data Dse is stored in the storage device 102 of the acoustic server 10.
  • the engine sound data Dse includes a plurality of engine sound data Dse_1 to Dse_25 (see FIG. 9).
  • the vehicle sound generation unit 117 selects one engine sound data Dse from among the plurality of engine sound data Dse_1 to Dse_25.
  • the first transmission control unit 115 transmits the selected engine sound data Dse to the in-vehicle audio device 20A as vehicle sound data.
  • the vehicle sound generation unit 117 determines the virtual engine rotation speed of the vehicle C (hereinafter referred to as "virtual engine rotation speed") based on the traveling speed information of the vehicle C.
  • the vehicle sound generation unit 117 determines the virtual engine rotation speed based on reference information (not shown) indicating the correspondence between the traveling speed of the vehicle C and the virtual engine rotation speed, for example.
  • the vehicle sound generation unit 117 selects one engine sound data Dse from the plurality of engine sound data Dse_1 to Dse_25 based on the virtual engine rotation speed and accelerator opening information.
  • FIG. 9 is a diagram schematically showing a map F for selecting engine sound data Dse from virtual engine speed and accelerator opening information. Map F is stored in the storage device 102, for example. Although FIG. 9 shows a case where the number of engine sound data Dse is 25, the number of engine sound data Dse is not limited to 25. Vehicle sound generation unit 117 identifies one piece of engine sound data Dse in map F that corresponds to a region where the virtual engine speed of vehicle C and the accelerator opening information intersect.
  • Vehicle sound generation section 117 reads out one piece of engine sound data Dse from storage device 102.
  • the first transmission control unit 115 transmits one piece of engine sound data Dse read out by the vehicle sound generation unit 117 to the vehicle-mounted audio device 20A as vehicle sound data.
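The selection on map F can be sketched as a two-dimensional bin lookup: the travel speed is first converted to a virtual engine speed via the reference information, and then the virtual engine speed and accelerator opening together index one of the 25 engine sound data Dse_1 to Dse_25. All numeric values below (bin edges, the speed-to-rpm factor) are invented for illustration; only the 5x5 structure follows Fig. 9.

```python
import bisect

RPM_EDGES = [1500, 2500, 3500, 4500]   # virtual engine speed bins (rpm), assumed
ACCEL_EDGES = [20, 40, 60, 80]         # accelerator opening bins (%), assumed

def virtual_engine_rpm(speed_kmh, rpm_per_kmh=45.0):
    """Reference information: travel speed -> virtual engine speed."""
    return speed_kmh * rpm_per_kmh

def select_engine_sound(speed_kmh, accel_percent):
    """Return the index (1..25) of the engine sound data Dse on map F."""
    row = bisect.bisect_right(RPM_EDGES, virtual_engine_rpm(speed_kmh))
    col = bisect.bisect_right(ACCEL_EDGES, accel_percent)
    return row * 5 + col + 1           # identifies Dse_1 .. Dse_25

idx = select_engine_sound(speed_kmh=60, accel_percent=50)   # -> 13 (Dse_13)
```

The identified Dse would then be read from the storage device and transmitted to the in-vehicle audio device as vehicle sound data.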
  • Alarm sound data Dsk: In the first embodiment, various alarm sounds accompanying the travel of the vehicle C were included in the system sound data Dss output from the vehicle ECU 50.
  • the vehicle sound generation unit 117 generates the alarm sound data Dsk based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. As shown in FIG. 8, alarm sound data Dsk is stored in the storage device 102 of the acoustic server 10.
  • the alarm sound data Dsk includes a plurality of alarm sound data Dsk.
  • the vehicle sound generation unit 117 selects one alarm sound data Dsk from among the plurality of alarm sound data Dsk.
  • the first transmission control unit 115 transmits the selected alarm sound data Dsk to the vehicle-mounted audio device 20A as vehicle sound data.
  • the second acquisition unit 112 acquires information indicating the operating state of the shift lever as information indicating the operating state of the vehicle C.
  • the vehicle sound generation unit 117 selects, from among the plurality of alarm sound data Dsk, alarm sound data Dsk indicating that the vehicle C is moving backward, and transmits it as vehicle sound data to the in-vehicle audio device 20A.
  • the second acquisition unit 112 acquires traveling speed information of the vehicle C as information indicating the traveling state of the vehicle C.
  • the vehicle sound generation unit 117 selects, from among the plurality of alarm sound data Dsk, alarm sound data Dsk indicating that the vehicle C is overspeeding, and transmits it as vehicle sound data to the in-vehicle audio device 20A.
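The two selection rules above (shift lever in reverse, travel speed over a limit) can be sketched as a simple rule table. The alarm identifiers and the speed limit value are hypothetical; only the mapping from operating/traveling state to alarm sound data Dsk follows the text.

```python
def select_alarm_sounds(shift_position, speed_kmh, speed_limit_kmh=100):
    """Return the alarm sound data Dsk to transmit as vehicle sound data."""
    alarms = []
    if shift_position == "R":            # shift lever operated to reverse (R)
        alarms.append("Dsk_reverse")     # "vehicle C is moving backward"
    if speed_kmh > speed_limit_kmh:      # traveling speed information
        alarms.append("Dsk_overspeed")   # "vehicle C is overspeeding"
    return alarms

alarms = select_alarm_sounds(shift_position="R", speed_kmh=40)  # -> ["Dsk_reverse"]
```

Each selected Dsk would then be transmitted to the in-vehicle audio device 20A as vehicle sound data.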
  • At least one of the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C may be used for acoustic processing of the input sound data Di.
  • For example, when information indicating the operating position of the shift lever is acquired as information indicating the operating state of the vehicle C and the shift lever is operated to reverse (R), the volume of sound data other than the system sound may be reduced.
  • For example, when the traveling speed information of the vehicle C is acquired as information indicating the traveling state of the vehicle C and the vehicle C is accelerating, the volume of the speaker 230 may be increased in response to the increase in the running sound of the vehicle C.
  • the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C are not limited to being acquired from the in-vehicle audio device 20A.
  • the operation state of the vehicle C or the running state of the vehicle C may be detected from an image of a surveillance camera placed on a road or an image of a camera mounted on another vehicle C.
  • the acoustic server 10 acquires at least one of the information indicating the operation state of the vehicle C and the information indicating the running state of the vehicle C as the second information. Therefore, the audio server 10 can reflect at least one of the operating state of the vehicle C and the running state of the vehicle C in the audio processing of the input sound data Di.
  • the audio server 10 generates vehicle sound data indicating the sound output from the in-vehicle audio device 20A based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. Therefore, the acoustic server 10 can reduce the processing load on the vehicle ECU 50 compared to the case where the vehicle ECU 50 generates the vehicle sound data.
  • FIG. 10 is a block diagram showing the configuration of the in-vehicle sound device 20A in the third embodiment.
  • the control device 216 of the in-vehicle sound device 20A functions as a setting acceptance unit 252, a reception control unit 254, and an output control unit 255.
  • the control device 216 of the in-vehicle sound device 20A does not function as a vehicle information transmission unit 251 and a second transmission control unit 253. That is, in the third embodiment, the local sound data Dsl is not transmitted from the in-vehicle sound device 20A to the sound server 10. Also, in the third embodiment, the acoustic characteristic information and environmental information are not transmitted from the in-vehicle sound device 20A to the sound server 10.
  • the in-vehicle audio device 20A includes an audio control device 240 between the control device (main control device) 216 and the amplifier 220.
  • the sound control device 240 is configured with a processor having lower performance than the control device 216 of the head unit 200.
  • the sound control device 240 adjusts the balance between the sound based on the output sound data Do and the sound based on the local sound data Dsl.
  • the sound based on the local sound data Dsl is at least one of the sound based on the sound data stored in the vehicle-mounted audio device 20A and the sound based on the sound data output by the device connected to the vehicle-mounted audio device 20A.
  • the setting reception unit 252 accepts selection of desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. Further, the setting reception unit 252 transmits information M specifying the selected distribution sound data Dsn to the audio server 10.
  • the reception control unit 254 receives the output sound data Do from the acoustic server 10 via the network N.
  • the output sound data Do is data obtained by performing acoustic processing on the distribution sound data Dsn.
  • the output sound generation unit 114 of the audio server 10 performs format conversion processing on the input sound data Di to generate output sound data Do. More specifically, the output sound generation unit 114 of the audio server 10 generates the output sound data Do by converting the input sound data Di, which is the distributed sound data Dsn, into a format that can be played by the audio control device 240.
  • the output control unit 255 outputs the output sound data Do received by the reception control unit 254 to the audio control device 240.
  • the acoustic control device 240 reproduces the output sound data Do and outputs it to the amplifier 220.
  • Amplifier 220 amplifies output sound data Do and outputs it to speaker 230.
  • the speaker 230 outputs sound based on the output sound data Do.
  • the output control unit 255 outputs the local sound data Dsl to the audio control device 240.
  • the sound control device 240 processes each sound data so as to adjust the balance between the sound based on the output sound data Do and the sound based on the local sound data Dsl, and outputs the processed sound data to the amplifier 220. Specifically, when the guidance voice of the navigation device 52 (guidance voice based on the guidance audio data Dsa) is to be output while a playback sound based on the output sound data Do is being output, the sound control device 240 reduces the volume of the playback sound and then outputs the guidance voice.
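The balance adjustment described above amounts to "ducking": while guidance voice is active, the playback sound is attenuated before the two signals are mixed for the amplifier. The per-sample sketch below is a hypothetical illustration; the duck gain and the crude activity test (nonzero guidance sample) are assumptions.

```python
def mix_with_ducking(playback, guidance, duck_gain=0.2):
    """Mix per-sample; attenuate playback wherever guidance is active."""
    mixed = []
    for p, g in zip(playback, guidance):
        if g != 0.0:              # guidance voice present (crude detector)
            p *= duck_gain        # reduce playback volume first
        mixed.append(p + g)       # summed signal then goes to amplifier 220
    return mixed

out = mix_with_ducking([1.0, 1.0, 1.0], [0.0, 0.5, 0.0])
# out[1] is about 0.7: ducked playback (0.2) plus guidance (0.5)
```

A real sound control device would apply smoothing (attack/release ramps) rather than switching the gain per sample, but the principle is the same.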
  • the in-vehicle audio device 20A includes a sound control device 240 that adjusts the balance between the sound based on the local sound data Dsl and the sound based on the output sound data Do.
  • the audio server 10 generates output sound data Do by converting the input sound data Di into a format that can be played by the audio control device 240.
  • the user can use the distributed sound data Dsn without being aware of the format that can be played by the audio control device 240. Further, since it is not necessary to install a dedicated application for each distribution server 30 and to update the version of the dedicated application, for example, user convenience can be improved.
  • FIG. 11 is a diagram illustrating the configuration of the information processing system 2 according to the fourth embodiment. Similar to the information processing system 1, the information processing system 2 includes a plurality of in-vehicle audio devices 20 (20A to 20N). Information processing system 2 includes an audio server 10, an in-vehicle audio device 20A, and a smartphone 40. In the fourth embodiment, the audio server 10 is an example of an information processing device that performs processing regarding a plurality of output sound data Do used in each of the plurality of vehicle-mounted audio devices 20A to 20N.
  • the audio server 10 determines the parameters used for audio processing, and performs audio processing on the input sound data Di to generate the output sound data Do.
  • the audio server 10 determines the parameters used for audio processing, and also determines the audio processing device that performs the audio processing.
  • the sound processing device can be, for example, at least one of the sound server 10, the smartphone 40, and the vehicle-mounted sound device 20A.
  • the acoustic server 10, smartphone 40, and vehicle-mounted audio device 20A that are candidates for the acoustic processing device will be referred to as "candidate devices.”
  • the smartphone 40 and the in-vehicle audio device 20A are capable of performing at least some of the above-described audio processing.
  • a program for performing at least some of the acoustic processing is installed in the control device of the smartphone 40 and in the control device 216 of the in-vehicle audio device 20A, so these devices can perform acoustic processing using parameters transmitted from the audio server 10.
  • the smartphone 40 is an electronic device carried by a user who uses the vehicle C and the in-vehicle audio device 20A.
  • the smartphone 40 is an example of another device different from the audio server 10.
  • the smartphone 40 communicates with the in-vehicle audio device 20A in the vehicle C using short-range wireless communication such as Bluetooth (registered trademark).
  • a music distribution application is installed on the smartphone 40, and the distribution sound data Dsn can be acquired from the distribution server 30 (see FIG. 2, etc.).
  • the distributed sound data Dsn acquired by the smartphone 40 is transmitted, for example, to the in-vehicle audio device 20A, and output from the speaker of the in-vehicle audio device 20A.
  • FIG. 12 is a block diagram showing the configuration of the acoustic server 10 in the fourth embodiment.
  • the control device 103 of the acoustic server 10 functions as a first acquisition unit 111, a second acquisition unit 112, a parameter determination unit 113, an output sound generation unit 114, a first transmission control unit 115, a device determination unit 118, and a third transmission control unit 119.
  • the second acquisition unit 112 and the parameter determination unit 113 function in the same manner as in the first embodiment.
  • the second acquisition unit 112 acquires at least one of first information regarding the attribute of the input sound data Di and second information regarding the in-vehicle audio device 20A.
  • the second acquisition unit 112 is an example of an information acquisition unit.
  • the parameter determination unit 113 determines parameters to be used in acoustic processing for imparting acoustic effects to the input sound data Di, based on at least one of the first information and the second information.
  • the device determining unit 118 determines a sound processing device that performs sound processing on the input sound data Di, based on at least one of the first information and the second information. For example, the device determining unit 118 may obtain, as an example of the first information, the terms of use of the distributed sound data Dsn (whether or not the audio server 10 can obtain the input sound data Di).
  • In some cases, the terms of use of a distribution service that distributes the distribution sound data Dsn stipulate that only devices used by users registered with the distribution service (for example, the smartphone 40 or the in-vehicle audio device 20A) can acquire the distribution sound data Dsn. In such cases, the audio server 10 cannot acquire the distribution sound data Dsn and cannot function as the sound processing device.
  • the device determining unit 118 determines the smartphone 40 or the in-vehicle audio device 20A as the sound processing device.
  • the device determining unit 118 may obtain, for example, information indicating the communication status between the in-vehicle audio device 20A and the audio server 10 as the second information. If the communication status between the in-vehicle audio device 20A and the audio server 10 is poor, a delay may occur in the transmission of the output sound data Do after the acoustic processing. In that case, the device determining unit 118 determines that the in-vehicle audio device 20A or the smartphone 40 should perform the acoustic processing.
  • the device determining unit 118 may determine the sound processing device based on, for example, information regarding the audio server 10, the smartphone 40, and the in-vehicle audio device 20A (hereinafter referred to as "candidate device information").
  • the candidate device information is, for example, information regarding the performance of the candidate device. Specifically, it is, for example, the product number (model number) of the candidate device or the components used in the candidate device (control device, recording device, etc.).
  • the device determining unit 118 determines that the candidate device performs the audio processing when the candidate device has data processing capability capable of performing the audio processing. If the candidate device has low processing capacity, delays may occur if the candidate device performs audio processing that requires a high load. By obtaining information regarding the performance of the candidate device, it is possible to appropriately set the acoustic processing load to be performed by the candidate device.
  • the candidate device information may be, for example, the real-time operating status (processing load) of the candidate device. If the candidate device is performing processing with a high load other than audio processing, if audio processing is further imposed, there is a possibility that processing will be delayed. Therefore, the device determining unit 118 may determine, among the candidate devices, the candidate device with the smallest current processing load as the audio processing device.
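The decision logic described above (terms of use, communication status, and candidate load) can be sketched as a simple filter-then-pick heuristic. The field names, flags, and threshold-free "least loaded" rule are invented; the source only says that each factor may be taken into account.

```python
def determine_processing_device(candidates, server_may_acquire_input,
                                link_to_server_ok):
    """candidates: dict of device name -> current processing load (0.0-1.0)."""
    eligible = dict(candidates)
    # Terms of use: if the server cannot acquire the distribution sound
    # data Dsn, it cannot act as the sound processing device.
    if not server_may_acquire_input:
        eligible.pop("acoustic_server", None)
    # Poor communication with the server risks delaying Do transmission.
    if not link_to_server_ok:
        eligible.pop("acoustic_server", None)
    # Among the remaining candidates, choose the least-loaded device.
    return min(eligible, key=eligible.get)

device = determine_processing_device(
    {"acoustic_server": 0.1, "smartphone": 0.6, "in_vehicle": 0.3},
    server_may_acquire_input=False, link_to_server_ok=True)
# device == "in_vehicle"
```

In a fuller version, static performance information (model number, components) would also prune candidates that cannot handle the required processing load.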
  • The sound processing device may be [1] the acoustic server 10, [2] a device other than the acoustic server 10, or [3] a plurality of devices; each case is described below.
  • the first acquiring unit 111 acquires the input sound data Di.
  • the output sound generation unit 114 performs acoustic processing on the input sound data Di using the parameters determined by the parameter determination unit 113, thereby generating output sound data Do used in the vehicle-mounted audio device 20A.
  • the first transmission control unit 115 transmits the output sound data Do to the in-vehicle audio device 20A via the network N.
  • the third transmission control unit 119 transmits the parameters determined by the parameter determining unit 113 to the other device.
  • the third transmission control section 119 is an example of a parameter transmission control section.
  • the input sound data Di may be transmitted to another device together with the parameters.
  • Other devices generate output sound data Do by subjecting input sound data Di to acoustic processing using parameters.
  • For example, when the smartphone 40 is the sound processing device, the output sound data Do is transmitted from the smartphone 40 to the in-vehicle audio device 20A.
  • When the acoustic processing includes a plurality of processes (steps), the series of processes may be shared among a plurality of devices. This makes it possible to avoid concentration of the processing load on a specific device.
  • the audio server 10 may perform some of the plurality of processes, and the remaining processes may be performed by another device (for example, the smartphone 40).
  • Assume that the parameter determining unit 113 determines a first parameter used in a first process and a second parameter used in a second process, and that the device determining unit 118 determines that the acoustic server 10 executes the first process and the smartphone 40 executes the second process.
  • the smartphone 40 is an example of a device other than the audio server 10.
  • the output sound generation unit 114 performs first processing on the input sound data Di using the first parameter to generate partially processed data.
  • the first transmission control unit 115 transmits the partially processed data and the second parameter to the smartphone 40.
  • the smartphone 40 performs second processing on the partially processed data using the second parameter, and generates output sound data Do.
  • the output sound data Do is transmitted from the smartphone 40 to the in-vehicle audio device 20A, and is output from the speaker 230 of the in-vehicle audio device 20A.
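The shared processing just described can be sketched as a two-stage pipeline: the acoustic server applies the first process with the first parameter to produce partially processed data, and the smartphone applies the second process with the second parameter to produce the output sound data Do. The concrete processes here (a gain stage, then an offset stage) are placeholders for real acoustic processing.

```python
def first_process(di, first_param):
    """Runs on the acoustic server: produces partially processed data."""
    return [x * first_param for x in di]        # placeholder gain stage

def second_process(partial, second_param):
    """Runs on the smartphone: produces the output sound data Do."""
    return [x + second_param for x in partial]  # placeholder offset stage

di = [1.0, 2.0, 3.0]                            # input sound data Di
partial = first_process(di, first_param=2.0)    # server side
do = second_process(partial, second_param=0.5)  # smartphone side
# do == [2.5, 4.5, 6.5] -> then sent to the in-vehicle audio device 20A
```

Only the partially processed data and the second parameter need to cross the network between the two stages, which is what the first transmission control unit 115 transmits in this scenario.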
  • the plurality of devices that share sound processing may be the smartphone 40 and the vehicle-mounted audio device 20A.
  • the parameter determining unit 113 determines the first parameter used in the first process and the second parameter used in the second process.
  • the device determining unit 118 determines that the first process will be performed by the smartphone 40, and determines that the second process will be performed by the in-vehicle audio device 20A.
  • the smartphone 40 is an example of a first sound processing device
  • the in-vehicle audio device 20A is an example of a second sound processing device.
  • the third transmission control unit 119 transmits the first parameter to the smartphone 40 and the second parameter to the in-vehicle audio device 20A.
  • the input sound data Di may be transmitted to the smartphone 40 together with the parameters.
  • the smartphone 40 performs a first process on the input sound data Di using the first parameter, and generates partially processed data.
  • the smartphone 40 transmits the partially processed data to the in-vehicle audio device 20A.
  • the in-vehicle audio device 20A performs second processing on the partially processed data using the second parameter, and generates output sound data Do.
  • the output sound data Do is output from the speaker 230 of the vehicle-mounted audio device 20A.
  • Both the first parameter and the second parameter may be transmitted to the smartphone 40. In this case, the second parameter is transmitted to the in-vehicle audio device 20A together with the partially processed data.
  • FIG. 13 is a flowchart showing the operation of the control device 103 of the sound server 10 in the fourth embodiment.
  • various data may be transmitted and received in file units or in packet units.
  • the control device 103 functions as the second acquisition unit 112, and acquires at least one of first information related to the attributes of the input sound data Di and second information related to the in-vehicle sound device 20A (step S50).
  • the control device 103 functions as the parameter determining unit 113, and determines parameters to be used for acoustic processing on the input sound data Di, based on at least one of the first information and the second information (step S51). Further, the control device 103 functions as the device determining unit 118, and determines a sound processing device based on at least one of the first information and the second information (step S52).
  • When the sound processing device is a device other than the acoustic server 10 (step S53: NO), the control device 103 functions as the third transmission control unit 119 and transmits the parameters determined in step S51 to the other device (the smartphone 40 or the in-vehicle audio device 20A) (step S54). After that, the control device 103 returns the process to step S50.
  • the control device 103 functions as the first acquisition unit 111 and acquires the input sound data Di (step S55).
  • the control device 103 functions as the output sound generation unit 114, and generates the output sound data Do by subjecting the input sound data Di to acoustic processing using the parameters determined in step S51 (step S57).
  • the control device 103 functions as the first transmission control section 115 and transmits the output sound data Do to the in-vehicle audio device 20A (step S58). After that, the control device 103 returns the process to step S50.
  • When the acoustic processing is shared with another device, the control device 103 functions as the output sound generation unit 114 and applies the partial acoustic processing handled by the acoustic server 10 (for example, the first process) to the input sound data Di to generate partially processed data (step S59). At this time, some of the parameters determined in step S51 (for example, the first parameter) are used.
  • the control device 103 functions as the first transmission control unit 115, and transmits the partially processed data and the parameters used in the processing handled by the other device (for example, the second parameters) to the other device (step S60). After that, the control device 103 returns the process to step S50.
  • the audio server 10 determines the audio processing device that performs audio processing on the input sound data Di, based on at least one of the first information and the second information. Therefore, the input sound data can be processed by an appropriate device, and the efficiency of the entire system can be improved.
  • the processing load on the in-vehicle audio device 20A or the smartphone 40 can be reduced compared to performing the audio processing in the in-vehicle audio device 20A or the smartphone 40.
  • because the audio server 10 performs high-load acoustic processing that the in-vehicle audio device 20A or the smartphone 40 could not handle without a control device dedicated to audio processing, the configuration of the in-vehicle audio device 20A or the smartphone 40 is simplified. As a result, the cost of the in-vehicle audio device 20A or the smartphone 40 is reduced.
  • the processing load of the audio processing can be distributed. Therefore, concentration of processing load on a specific device is avoided.
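The dispatch flow of steps S50 to S60 described above can be sketched as follows. This is a minimal illustration only: all function names, dictionary keys, and decision thresholds are assumptions for clarity, not details taken from the disclosure.

```python
# Illustrative sketch of the audio server's dispatch flow (steps S50-S60).
# Names, keys, and thresholds are hypothetical; the disclosure does not
# prescribe a concrete implementation.

def determine_parameters(first_info, second_info):
    # Step S51: derive acoustic-processing parameters from the available info.
    params = {"bass_gain_db": 0.0, "volume": 0.8}
    if second_info.get("location") == "outdoors":
        params["bass_gain_db"] = 6.0
    return params

def determine_device(first_info, second_info):
    # Step S52: choose which device performs the acoustic processing.
    if not second_info.get("supports_dsp", False):
        return "server"   # output device cannot process -> server does it all
    if first_info.get("high_load", False):
        return "split"    # share the first/second processes between devices
    return "other"        # delegate entirely (e.g. smartphone 40 or device 20A)

def dispatch(first_info, second_info, input_sound_data):
    params = determine_parameters(first_info, second_info)  # S51
    device = determine_device(first_info, second_info)      # S52
    if device == "other":                                   # S53: NO
        return ("send_params", params)                      # S54
    if device == "server":                                  # S55-S57
        output = ("processed", input_sound_data, params)
        return ("send_output", output)                      # S58
    partial = ("first_processed", input_sound_data)         # S59
    return ("send_partial", partial, params)                # S60
```

After each branch the control loop would return to step S50, re-acquiring the first and second information before the next dispatch decision.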
  • the sound output device is the vehicle-mounted audio device 20A to 20N
  • the sound output device is not limited to this, and may be any electronic device that can use sound data.
  • the sound output device may be an electronic device that is carried and used by the user.
  • the electronic device carried and used by the user may be, for example, a smartphone, a portable audio player, a personal computer, a tablet terminal, a smart watch, or the like. These electronic devices either have built-in speakers or have external speakers or earphones attached.
  • the second acquisition unit 112 of the acoustic server 10 acquires the output characteristics of the earphones connected to the smartphone as second information.
  • the parameter determining unit 113 determines parameters for sound processing in accordance with the output characteristics of the earphone. Thereby, the quality of the sound output from the earphones can be improved.
  • the second acquisition unit 112 acquires the position information of the earphone (or smartphone) as second information.
  • the parameter determining unit 113 determines parameters for sound processing according to the position of the earphone. Specifically, the parameter determining unit 113 increases the gain in the bass range when the earphone is outdoors, and lowers the volume when the earphone is indoors. As a result, acoustic processing suited to the listening location can be performed, improving both the sound quality and the ease of hearing of the sound output from the earphones.
  • processing resources conventionally devoted to sound processing can be used for other processing, and it is possible to prevent a decline in smartphone performance due to the use of sound data.
  • since the smartphone's control device does not perform audio processing, the smartphone's power consumption can be reduced.
  • since the smartphone's control device does not perform sound processing, there is no need to use a high-spec control device for processing sound data, and costs can be reduced.
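The earphone-oriented behaviour described above (compensating the earphone's output characteristics, raising bass gain outdoors, lowering volume indoors) could be sketched as below. The field names and numeric values are illustrative assumptions, not values from the disclosure.

```python
def earphone_parameters(second_info):
    """Map earphone-related second information to acoustic parameters.

    Illustrative only: the real parameter set and its values are not
    specified by the disclosure.
    """
    params = {"bass_gain_db": 0.0, "volume": 0.8}
    # Compensate a weak low end reported in the earphone's output profile.
    characteristics = second_info.get("output_characteristics", {})
    if characteristics.get("bass_rolloff_hz", 0) > 100:
        params["bass_gain_db"] += 3.0
    # Adapt to the listening location (position information of the earphone).
    location = second_info.get("location")
    if location == "outdoors":
        params["bass_gain_db"] += 6.0
    elif location == "indoors":
        params["volume"] = 0.5
    return params
```

Because this mapping runs on the acoustic server, the smartphone only receives finished output sound data, which is what allows its control device to skip sound processing entirely.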
  • Functions of the acoustic server 10 are realized by cooperation of one or more processors forming the control device 103 and the program PG1 stored in the storage device 102.
  • the functions of the in-vehicle audio devices 20A to 20N are realized by cooperation of one or more processors forming the control device 216 and the program PG2 stored in the storage device 215.
  • the above program may be provided in a form stored in a computer-readable recording medium and installed on a computer.
  • the recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disk) such as a CD-ROM is a good example, but any known recording medium such as a semiconductor recording medium or a magnetic recording medium is also included. Note that the non-transitory recording medium includes any recording medium except a transitory, propagating signal, and does not exclude volatile recording media. Further, in a configuration in which a distribution device distributes the program via the network N, the recording medium that stores the program in the distribution device corresponds to the above-mentioned non-transitory recording medium.
  • "At least one of A and B" ("at least one of A and B" or "at least one of A or B") means "(A), (B), or (A and B)".
  • "at least one of A and B" may be rephrased as "one or more of A and B" or "at least one selected from the group of A and B".
  • "At least one of A, B, and C" ("at least one of A, B and C" or "at least one of A, B or C") means "(A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C)".
  • "at least one of A, B, and C" may be rephrased as "one or more of A, B, and C" or "at least one selected from the group of A, B, and C".
  • An information processing device according to one aspect is an information processing device that generates a plurality of output sound data used in each of a plurality of sound output devices, and includes a data acquisition unit that acquires input sound data;
  • an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data, based on at least one of the first information and the second information; an output sound generation unit that performs the acoustic processing on the input sound data using the parameter to generate output sound data to be used in the one sound output device;
  • and a data transmission control section that transmits the output sound data to the one sound output device via a network.
  • the information processing device generates output sound data by performing acoustic processing on input sound data, and transmits the output sound data to the one sound output device. Therefore, it is not necessary to arrange a control device for performing sound processing in the one sound output device, so the configuration of the one sound output device is simplified and its cost is reduced. Further, the information processing device determines the parameters used for acoustic processing of the input sound data based on at least one of the first information regarding attributes of the input sound data and the second information regarding the one sound output device. Because the sound processing parameters are set appropriately, the sound quality of the sound based on the output sound data is improved.
  • the information acquisition unit acquires information indicating acoustic characteristics of the one sound output device as the second information.
  • the acoustic characteristics of the one sound output device are reflected in the sound processing parameters, so the information processing device can perform sound processing suitable for using the output sound data with the one sound output device.
  • the information acquisition unit acquires information regarding sounds generated around the one sound output device as the second information.
  • the sound around the one sound output device is reflected in the sound processing parameters, so the information processing device can perform acoustic processing appropriate for using the output sound data in an environment where sounds are generated in the surroundings.
  • the information acquisition unit continuously acquires the information regarding the sound from the one sound output device during transmission of the output sound data, and the parameter determination unit redetermines the parameters if the information regarding the sound changes.
  • the information acquisition unit acquires information regarding the format of the input sound data as the first information, and acquires, as the second information, information regarding the formats that the one sound output device can output.
  • the parameter determination unit determines, as the parameters, whether format conversion processing of the input sound data is necessary and, if so, the conversion destination format. According to the above configuration, the user can use the input sound data on the one sound output device without being aware of the format of the input sound data.
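One way to realize the format-conversion decision described above is sketched below. The field names and the fallback rule (preferring the device's first-listed format) are assumptions for illustration, not details from the disclosure.

```python
def format_conversion_plan(first_info, second_info):
    """Decide whether format conversion is needed and, if so, the target.

    first_info carries the input sound data's format (first information);
    second_info lists the formats the one sound output device can output
    (second information). Field names are illustrative.
    """
    source = first_info["format"]                    # e.g. "flac"
    playable = second_info["supported_formats"]      # e.g. ["mp3", "aac"]
    if source in playable:
        return {"convert": False, "target": None}
    # Assumed rule: fall back to the device's first-listed format.
    return {"convert": True, "target": playable[0]}
```

With such a plan in hand, the server converts when needed and the user never has to know what format the input sound data arrived in.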
  • the information processing device further includes a reception unit that accepts a change of the parameters from a user using the one sound output device while the output sound data is being transmitted, and the parameter determination unit changes the parameters used for the acoustic processing to the parameters changed by the user when the reception unit accepts the change.
  • the information processing device can perform acoustic processing reflecting the user's preference or the situation that is not reflected in the first information or the second information.
  • the one sound output device is an in-vehicle audio device that outputs the sound into the cabin of the vehicle.
  • the information processing device can improve the sound quality of the sound output into the cabin of the vehicle, which, unlike the inside of a building, is a poor listening environment.
  • the information acquisition unit acquires, as the second information, at least one of information indicating an operating state of the vehicle and information indicating a running state of the vehicle.
  • the information processing device can reflect at least one of the operation state of the vehicle and the running state of the vehicle in the acoustic processing of input sound data.
  • the information processing device further includes a vehicle sound generation section that generates vehicle sound data representing a sound to be output from the one sound output device, based on at least one of information indicating an operating state of the vehicle and information indicating a running state of the vehicle;
  • and the data transmission control section transmits the vehicle sound data to the one sound output device via the network.
  • the information processing device can reduce the processing load on the control device provided in the vehicle, compared to the case where vehicle sound data is generated by the control device provided in the vehicle.
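A minimal sketch of folding the vehicle's operating and running states into the parameters, including server-generated vehicle sound data, might look as follows. All thresholds, field names, and the "engine tone" placeholder are hypothetical.

```python
def vehicle_parameters(second_info):
    """Derive parameters and optional vehicle sound data from vehicle state.

    second_info is assumed to carry the running state (speed) and the
    operating state (accelerator position); values are illustrative.
    """
    speed_kmh = second_info.get("speed_kmh", 0)
    params = {"volume": 0.6, "vehicle_sound": None}
    # Raise playback volume to compensate road/wind noise at speed.
    if speed_kmh > 80:
        params["volume"] = 0.9
    elif speed_kmh > 30:
        params["volume"] = 0.75
    # Example of vehicle sound data generated on the server side,
    # offloading the vehicle's own control device.
    if second_info.get("accelerator", 0.0) > 0.5:
        params["vehicle_sound"] = "engine_tone_high"
    return params
```

Generating the vehicle sound on the server, rather than in the vehicle, is what yields the processing-load reduction described above.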
  • An information processing device according to another aspect is an information processing device that performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices.
  • the information processing device determines a sound processing device that performs sound processing on input sound data based on at least one of the first information and the second information. Therefore, the input sound data can be processed by an appropriate device, and the efficiency of the entire system can be improved.
  • the information processing device includes a data acquisition unit that acquires the input sound data when the device determination unit determines that the information processing device is the sound processing device; an output sound generation unit that generates output sound data to be used in the one sound output device by subjecting the input sound data to the acoustic processing using the parameters;
  • and a data transmission control section that transmits the output sound data to the one sound output device.
  • the configuration of the other devices can be simplified, and as a result, the costs of the other devices are reduced.
  • the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process.
  • the parameter determining unit determines the first parameter and the second parameter
  • the device determining unit determines that the information processing device executes the first process
  • the device determining unit determines that another device other than the information processing device executes the second process
  • the output sound generation unit performs the first processing on the input sound data using the first parameter to generate partially processed data.
  • the data transmission control unit transmits the partially processed data and the second parameter to the other device.
  • the device determining unit determines that another device other than the information processing device is the sound processing device
  • the information processing device further includes a parameter transmission control section that transmits the parameters to the other device. According to the above configuration, the other device determined as the sound processing device is provided with the parameters used for the sound processing, and the sound processing is appropriately executed.
  • the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process.
  • the parameter determining unit determines the first parameter and the second parameter
  • the device determining unit determines that a first sound processing device executes the first process and that a second sound processing device executes the second process
  • the parameter transmission control unit transmits the first parameter to the first sound processing device and the second parameter to the second sound processing device.
  • the processing load on each device is reduced.
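The split between the first process (executed with the first parameter) and the second process (executed on another device with the second parameter) can be sketched with two stand-in operations. Gain and limiting are assumptions chosen for illustration; the disclosure does not name the concrete processes.

```python
def first_process(samples, first_param):
    # Executed by the first device (e.g. the server): produces the
    # partially processed data using the first parameter.
    return [s * first_param["gain"] for s in samples]

def second_process(partial, second_param):
    # Executed by the other device after receiving the partially
    # processed data and the second parameter over the network.
    limit = second_param["limit"]
    return [max(-limit, min(limit, s)) for s in partial]

def split_acoustic_processing(samples, first_param, second_param):
    partial = first_process(samples, first_param)     # first device
    # ... partial and second_param are transmitted here ...
    return second_process(partial, second_param)      # second device
```

Each device runs only its own half of the chain, which is how the processing load on each device is reduced.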
  • the one sound output device is an electronic device that is carried and used by the user. According to the above configuration, the quality of sound output from the electronic device carried and used by the user is improved.
  • An information processing system includes a plurality of sound output devices and an information processing device that generates a plurality of output sound data used in each of the plurality of sound output devices.
  • the information processing system includes a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices;
  • a parameter determination unit that determines a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data, based on at least one of the first information and the second information;
  • an output sound generation section that performs the acoustic processing on the input sound data using the parameter to generate output sound data to be used in the one sound output device; and a data transmission control section that transmits the output sound data to the one sound output device.
  • An information processing system includes a plurality of sound output devices and an information processing device that performs processing regarding a plurality of output sound data used in each of the plurality of sound output devices.
  • The information processing system includes an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one of the plurality of sound output devices; a parameter determination unit that determines parameters to be used in acoustic processing for imparting acoustic effects to the input sound data, based on at least one of the first information and the second information;
  • and a device determining unit that determines a sound processing device that performs the sound processing on the input sound data, based on at least one of the first information and the second information.
  • An information processing method according to one aspect is an information processing method, realized by a computer, that generates a plurality of output sound data to be used in each of a plurality of sound output devices. The method acquires input sound data, and acquires at least one of first information regarding attributes of the input sound data and second information regarding one of the plurality of sound output devices.
  • The method determines parameters to be used in acoustic processing for imparting acoustic effects to the input sound data, based on at least one of the first information and the second information, performs the acoustic processing on the input sound data using the parameters to generate output sound data to be used in the one sound output device, and transmits the output sound data to the one sound output device via the network.
  • An information processing method is an information processing method that is realized by a computer and performs processing on multiple output sound data used in each of multiple sound output devices, and obtains at least one of first information related to attributes of input sound data and second information related to one of the multiple sound output devices, determines parameters to be used in sound processing that imparts sound effects to the input sound data based on at least one of the first information and the second information, and determines a sound processing device that performs the sound processing on the input sound data based on at least one of the first information and the second information.
  • the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process.
  • the first parameter and the second parameter are determined, it is determined that the computer executes the first process and that another device other than the computer executes the second process, the input sound data is acquired, the first process is performed on the input sound data using the first parameter to generate partially processed data, and the partially processed data and the second parameter are transmitted to the other device.
  • Operating device 213... Sound data acquisition device, 214... Microphone, 215...Storage device, 216...Control device, 220...Amplifier, 230 (230A to 230F)...Speaker, 240...Sound control device, 251...Vehicle information transmitting section, 252...Setting reception section, 253...Second transmission control section, 254...Reception control unit, 255...Output control unit, C...Vehicle, Di...Input sound data, N...Network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

This information processing device generates a plurality of pieces of output sound data used in each of a plurality of sound output devices. A data acquisition unit acquires input sound data. An information acquisition unit acquires first information relating to an attribute of the input sound data and/or second information relating to the plurality of sound output devices. A parameter determination unit determines, on the basis of the first information and/or the second information, a parameter used for acoustic processing that imparts an acoustic effect to the input sound data. An output sound generation unit applies the acoustic processing to the input sound data using the parameter, thereby generating output sound data to be used in a sound output device. A data transmission control unit transmits the output sound data to the sound output device via a network.
PCT/JP2023/026774 2022-09-21 2023-07-21 Dispositif de traitement d'informations, système de traitement d'informations et procédé de traitement d'informations WO2024062757A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-149912 2022-09-21
JP2022149912 2022-09-21

Publications (1)

Publication Number Publication Date
WO2024062757A1 true WO2024062757A1 (fr) 2024-03-28

Family

Family ID: 90454419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/026774 WO2024062757A1 (fr) 2022-09-21 2023-07-21 Dispositif de traitement d'informations, système de traitement d'informations et procédé de traitement d'informations

Country Status (1)

Country Link
WO (1) WO2024062757A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000010576A * 1998-06-24 2000-01-14 Yamaha Motor Co Ltd Engine simulated sound generating device
JP2014069656A * 2012-09-28 2014-04-21 Pioneer Electronic Corp Acoustic device, output sound management device, terminal device, and output sound control method
JP2016072973A * 2014-09-24 2016-05-09 Electronics and Telecommunications Research Institute Audio metadata providing apparatus and audio data playback apparatus supporting dynamic format conversion, methods performed by the apparatuses, and computer-readable recording medium recording the dynamic format conversion
JP2020109968A * 2019-01-04 2020-07-16 Harman International Industries Inc Customized audio processing based on user-specific audio information and hardware-specific audio information


Similar Documents

Publication Publication Date Title
US10250960B2 (en) Sound reproduction device including auditory scenario simulation
CN109147815B (zh) 用于车辆中的选择性音量调节的系统和方法
US10142758B2 (en) System for and a method of generating sound
KR100921584B1 (ko) 탑재식 음악 재생 장치 및 음악 정보 분배 시스템
US9683884B2 (en) Selective audio/sound aspects
US8019454B2 (en) Audio processing system
US20120051561A1 (en) Audio/sound information system and method
US20070171788A1 (en) Audio data reproducing method and program therefor
TW200922272A (en) Automobile noise suppression system and method thereof
CN102640522A (zh) 音频数据处理装置、音频装置、音频数据处理方法、程序以及记录该程序的记录介质
CN113421564A (zh) 语音交互方法、语音交互系统、服务器和存储介质
WO2024062757A1 (fr) Dispositif de traitement d'informations, système de traitement d'informations et procédé de traitement d'informations
KR101500177B1 (ko) 외부 음원 연동이 가능한 차량의 오디오 시스템 및 이를 이용한 외부 음원의 음향 출력 방법
WO2021175735A1 (fr) Dispositif électronique, procédé et programme informatique
JP2019080188A (ja) オーディオシステム及び車両
JP2007110481A (ja) 音声制御装置、補正装置
JP2008071058A (ja) 音声再生装置、音声再生方法、プログラム
CN113270082A (zh) 一种车载ktv控制方法及装置、以及车载智能网联终端
JP2006293697A5 (fr)
CN115278484A (zh) 音频流的控制方法、装置、设备及介质
US20200081681A1 (en) Mulitple master music playback
US7873424B1 (en) System and method for optimizing digital audio playback
WO2022121617A1 (fr) Procédé de karaoké, terminal monté sur véhicule et véhicule
US11902767B2 (en) Combining prerecorded and live performances in a vehicle
JP7423156B2 (ja) 音声処理装置および音声処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23867883

Country of ref document: EP

Kind code of ref document: A1