WO2024062757A1 - Information processing device, information processing system and information processing method


Info

Publication number
WO2024062757A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
sound
sound data
vehicle
processing
Prior art date
Application number
PCT/JP2023/026774
Other languages
French (fr)
Japanese (ja)
Inventor
正寛 中西
威 岡見
信晃 姫野
宏親 前垣
誠治 平出
Original Assignee
ヤマハ株式会社 (Yamaha Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ヤマハ株式会社 (Yamaha Corporation)
Publication of WO2024062757A1 publication Critical patent/WO2024062757A1/en

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 - Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10K - SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
    • G10K15/00 - Acoustics not otherwise provided for
    • G10K15/04 - Sound-producing devices
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 - Circuits for transducers, loudspeakers or microphones

Definitions

  • the present disclosure relates to a technology for processing sound data used in a sound output device.
  • a sound output device placed near a user accesses a user profile placed on a cloud, and determines processing parameters for sound data based on the user profile.
  • the sound output device outputs to the user sound based on the sound data processed using the processing parameters.
  • processing of sound data is performed by a sound output device. Therefore, it is necessary to equip the sound output device with a high-performance control device that can process sound data, which poses a problem in that the cost of the sound output device increases.
  • One aspect of the present disclosure aims to reduce the cost of a sound output device.
  • An information processing device according to one aspect of the present disclosure generates a plurality of output sound data used in each of a plurality of sound output devices, and includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding an attribute of the input sound data and second information regarding a first sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, parameters to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the first sound output device by performing the acoustic processing on the input sound data using the parameters; and a data transmission control unit that transmits the output sound data to the first sound output device via a network.
  • An information processing device according to another aspect performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices, and includes: an information acquisition unit that acquires at least one of first information regarding an attribute of input sound data and second information regarding one of the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter used in acoustic processing that imparts an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • an information processing system includes: a plurality of sound output devices; and an information processing device that generates a plurality of output sound data used in each of the plurality of sound output devices.
  • The information processing device includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding an attribute of the input sound data and second information regarding a first sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the first sound output device by subjecting the input sound data to the acoustic processing using the parameter; and a data transmission control unit that transmits the output sound data to the first sound output device via a network.
  • an information processing system includes: a plurality of sound output devices; and an information processing device that performs processing regarding a plurality of output sound data used in each of the plurality of sound output devices.
  • The information processing device includes: an information acquisition unit that acquires at least one of first information regarding an attribute of input sound data and second information regarding one of the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • An information processing method according to one aspect is realized by a computer and generates a plurality of output sound data to be used in each of a plurality of sound output devices. The method includes: acquiring input sound data; acquiring at least one of first information regarding an attribute of the input sound data and second information regarding one of the plurality of sound output devices; determining, based on at least one of the first information and the second information, parameters to be used in acoustic processing for imparting an acoustic effect to the input sound data; generating output sound data by applying the acoustic processing to the input sound data using the parameters; and transmitting the output sound data to the one sound output device via a network.
  • An information processing method according to another aspect is realized by a computer and performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices. The method includes: acquiring at least one of first information regarding an attribute of input sound data and second information regarding one of the plurality of sound output devices; determining, based on at least one of the first information and the second information, parameters to be used in acoustic processing for imparting an acoustic effect to the input sound data; and determining, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • FIG. 1 is a diagram illustrating the configuration of an information processing system 1 according to a first embodiment.
  • FIG. 2 is an explanatory diagram showing the relationship between input sound data Di and output sound data Do.
  • FIG. 3A is a diagram schematically showing an example of a data flow between a distribution server 30 and an in-vehicle audio device 20A.
  • FIG. 3B is a diagram schematically showing another example of the data flow between the distribution server 30 and the in-vehicle audio device 20A.
  • FIG. 4 is a block diagram showing the configuration of an acoustic server 10.
  • FIG. 5 is a block diagram showing the configuration of an in-vehicle audio device 20A.
  • FIG. 6 is a diagram illustrating the arrangement of speakers 230 in a vehicle C.
  • FIG. 7 is a flowchart showing the operation of the control device 103 of the acoustic server 10.
  • FIG. 8 is a block diagram showing the configuration of the acoustic server 10 according to a second embodiment.
  • FIG. 9 is a diagram schematically showing a map F for selecting engine sound data Dse from a virtual engine speed and accelerator opening information.
  • FIG. 10 is a block diagram showing the configuration of the in-vehicle audio device 20A in a third embodiment.
  • FIG. 11 is a diagram illustrating the configuration of an information processing system 2 according to a fourth embodiment.
  • FIG. 12 is a block diagram showing the configuration of the acoustic server 10 in the fourth embodiment.
  • FIG. 13 is a flowchart showing the operation of the control device 103 of the acoustic server 10 in the fourth embodiment.
  • FIG. 1 is a diagram illustrating the configuration of an information processing system 1 according to a first embodiment.
  • the information processing system 1 includes an audio server 10 and a plurality of in-vehicle audio devices 20 (20A to 20N).
  • the audio server 10 is an example of an information processing device and a computer, and the in-vehicle audio devices 20A to 20N are an example of a plurality of sound output devices.
  • the audio server 10 and each of the plurality of in-vehicle audio devices 20A to 20N are connected to a network N.
  • the network N may be a wide area network such as the Internet, or may be a local area network (LAN) of a facility or the like.
  • the in-vehicle audio devices 20A to 20N are each mounted on a vehicle C (see FIG. 5) such as an automobile, and output sound into the cabin of the vehicle C from a speaker 230 (see FIG. 5).
  • Each of the plurality of in-vehicle audio devices 20A to 20N is mounted on a plurality of different vehicles C.
  • the functions will be explained focusing on one of the plurality of vehicle-mounted sound devices 20A to 20N, but the other vehicle-mounted sound devices 20B to 20N also have the same functions as the vehicle-mounted sound device 20A.
  • the vehicle-mounted audio device 20A is an example of one sound output device among the plurality of vehicle-mounted audio devices 20A to 20N.
  • the sounds output from the in-vehicle audio devices 20A to 20N are, for example, sounds such as songs or radio broadcasts, guidance sounds from the navigation device 52, or warning sounds from the safety system of the vehicle C.
  • the audio server 10 generates a plurality of output sound data Do (see FIG. 2) used in each of the plurality of in-vehicle audio devices 20A to 20N.
  • FIG. 2 is an explanatory diagram showing the relationship between input sound data Di and output sound data Do.
  • one vehicle-mounted audio device 20A among the plurality of vehicle-mounted audio devices 20A to 20N is taken as an example.
  • the audio server 10 acquires the sound data of the sound output by the in-vehicle audio device 20A as input sound data Di.
  • the input sound data Di includes at least one of local sound data Dsl transmitted from the in-vehicle audio devices 20A to 20N and distributed sound data Dsn distributed from the distribution server 30.
  • the distribution server 30 is a server that distributes sound data via the network N.
  • the acoustic server 10 generates output sound data Do by performing acoustic processing to add acoustic effects to the input sound data Di, and transmits the output sound data Do to the in-vehicle audio device 20A.
  • the in-vehicle audio device 20A that has received the output sound data Do outputs sound based on the output sound data Do from the speaker 230.
  • the audio server 10 similarly transmits the output sound data Do to the other vehicle-mounted audio devices 20B to 20N.
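  • The FIG. 2 flow described above (acquire input sound data Di, determine parameters, apply acoustic processing, return output sound data Do) can be sketched as follows. This is only an illustrative sketch: the function names, the `num_speakers` field, and the concrete gain/spread rules are hypothetical and do not appear in the patent.

```python
# Illustrative sketch of the FIG. 2 server-side pipeline. All names and the
# concrete parameter rules below are hypothetical, not from the patent.

def determine_parameters(first_info, second_info):
    """Pick effect parameters from sound attributes (first information)
    and device information (second information)."""
    # Hypothetical rule: boost classical music slightly.
    gain = 1.2 if first_info.get("genre") == "classical" else 1.0
    # Hypothetical rule: more speakers -> wider stereo spread, capped at 1.0.
    spread = min(0.5 + 0.1 * second_info.get("num_speakers", 2), 1.0)
    return {"gain": gain, "spread": spread}

def apply_acoustic_processing(samples, params):
    """Trivial gain stage standing in for the real acoustic effect."""
    return [s * params["gain"] for s in samples]

def generate_output_sound(input_sound_di, first_info, second_info):
    """Di in, Do out: determine parameters, then apply the processing."""
    params = determine_parameters(first_info, second_info)
    return apply_acoustic_processing(input_sound_di, params)
```

  • Keeping this pipeline on the server rather than in each vehicle is the cost reduction the disclosure aims at: the in-vehicle device only needs to receive and play Do.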
  • FIG. 4 is a block diagram showing the configuration of the audio server 10.
  • the acoustic server 10 includes a communication device 101, a storage device 102, and a control device 103.
  • the communication device 101 communicates with other devices using wireless communication or wired communication.
  • the communication device 101 includes a communication interface connectable to the network N using wired communication, and communicates with the in-vehicle audio devices 20A to 20N via the network N. Furthermore, the communication device 101 communicates with the distribution server 30 via the network N.
  • the storage device 102 stores a program PG1 executed by the control device 103.
  • the storage device 102 also stores map data MP, vehicle-specific acoustic characteristic information DB, and user setting data US.
  • the map data MP includes at least one of information such as the topography of each region, the shape of the road, the number of lanes, the type of facilities (including forests, etc.) around the road, and predicted traffic volume by time of day.
  • the map data MP is not limited to being stored in the storage device 102, but may be acquired via the network N from a map data server (not shown) that distributes the map data MP, for example. Details of the vehicle-specific acoustic characteristic information DB and the user setting data US will be described later.
  • the storage device 102 is a computer-readable recording medium (for example, a computer-readable non-transitory recording medium).
  • Storage device 102 includes nonvolatile memory and volatile memory.
  • Nonvolatile memories include, for example, ROM (Read Only Memory), EPROM (Erasable Programmable Read Only Memory), and EEPROM (Electrically Erasable Programmable Read Only Memory).
  • the volatile memory is, for example, RAM (Random Access Memory).
  • the storage device 102 may be a portable recording medium that can be attached to and detached from the audio server 10, or a recording medium (for example, cloud storage) that the control device 103 can write to and read from via the network N.
  • the control device 103 is composed of one or more processors that control each element of the audio server 10.
  • the control device 103 is configured with one or more types of processors such as a CPU (Central Processing Unit), an SPU (Sound Processing Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit).
  • by executing the program PG1, the control device 103 functions as a first acquisition section 111, a second acquisition section 112, a parameter determination section 113, an output sound generation section 114, a first transmission control section 115, and a change reception section 116. Details of these sections will be described later.
  • FIG. 5 is a block diagram showing the configuration of the in-vehicle audio device 20A.
  • the vehicle-mounted audio device 20A will be described as an example, but the vehicle-mounted audio devices 20B to 20N have a similar configuration.
  • the in-vehicle audio device 20 is mounted on the vehicle C.
  • Vehicle-mounted audio device 20 includes a head unit 200, an amplifier 220, and a speaker 230.
  • the head unit 200 is provided in the instrument panel of the vehicle C, for example.
  • the head unit 200 includes a communication device 211 , an operating device 212 , a sound data acquisition device 213 , a microphone 214 , a storage device 215 , and a control device 216 .
  • the communication device 211 includes a communication interface for wide area communication network connection that can be connected to the network N using wireless communication, and communicates with the acoustic server 10 via the network N.
  • the communication device 211 receives output sound data Do from the audio server 10.
  • Communication device 211 is an example of a receiving device.
  • the operating device 212 receives operations performed by the user of the vehicle C.
  • the user of vehicle C is, for example, a passenger of vehicle C.
  • the operating device 212 is a touch panel.
  • the operation device 212 is not limited to a touch panel, but may be an operation panel having various operation buttons.
  • the sound data acquisition device 213 acquires sound data of the sound output by the in-vehicle audio device 20.
  • the sound data acquisition device 213 may be a reading device that reads sound data stored in a recording medium such as a CD (Compact Disc) or an SD card.
  • the sound data acquisition device 213 may be a radio broadcast or television broadcast receiving device.
  • the sound data acquisition device 213 may be a communication device that can be connected to a nearby electronic device (for example, a smartphone, a portable music player, etc.) using, for example, wireless communication or wired communication.
  • the sound data acquisition device 213 includes a communication interface for short-range communication (for example, Bluetooth (registered trademark), USB (Universal Serial Bus), etc.), and communicates with devices located nearby.
  • the sound data acquired by the sound data acquisition device 213 is hereinafter referred to as "acquired sound data Dsy.”
  • the microphone 214 picks up the sound inside the cabin of the vehicle C and generates sound data of the collected sound (hereinafter referred to as "picked-up data").
  • the sound data generated by the microphone 214 is output to the control device 216 of the head unit 200.
  • the microphones 214 are not limited to being provided in the head unit 200, but may be provided in multiple locations in the vehicle interior, or may be provided outside the vehicle. Additionally, the microphone 214 may be externally connected to the head unit 200.
  • the storage device 215 stores a program PG2 executed by the control device 216.
  • the storage device 215 may also store sound data.
  • the sound data stored in the storage device 215 may be, for example, sound data indicating a song or the like, or may be system sound output when the head unit 200 is operated.
  • the sound data stored in the storage device 215 will be referred to as "stored sound data Dsm" hereinafter.
  • the storage device 215 is a computer-readable recording medium (for example, a computer-readable non-transitory recording medium).
  • Storage device 215 includes nonvolatile memory and volatile memory.
  • Non-volatile memories are, for example, ROM, EPROM and EEPROM.
  • Volatile memory is, for example, RAM.
  • the storage device 215 may be a portable recording medium that can be attached to and removed from the in-vehicle audio device 20, or a recording medium (for example, cloud storage) that the control device 216 can write to and read from via the network N.
  • the control device 216 is composed of one or more processors that control each element of the in-vehicle audio device 20.
  • the control device 216 is configured with one or more types of processors such as a CPU, SPU, DSP, FPGA, or ASIC.
  • the control device 216 is connected to a vehicle ECU (Electronic Control Unit) 50, a navigation device 52, and a camera 54.
  • Vehicle ECU 50 controls the operation of vehicle C. More specifically, based on the operating states of operation mechanisms of the vehicle C such as the steering wheel, shift lever, accelerator pedal, and brake pedal (not shown), the vehicle ECU 50 controls the drive mechanism of the vehicle C, such as the engine or motor, and the braking mechanism, such as the brakes.
  • Vehicle ECU 50 outputs system sound data Dss of vehicle C to control device 216. For example, when the shift lever is operated in reverse (R), the vehicle ECU 50 outputs an alarm sound indicating that the vehicle C is moving backward as the system sound data Dss. Further, for example, when the traveling speed of the vehicle C exceeds the speed limit, the vehicle ECU 50 outputs an alarm sound indicating that the vehicle C is overspeeding as the system sound data Dss.
  • the navigation device 52 searches for a route to a destination point set by the user and provides route guidance to the destination point. For example, the navigation device 52 displays a map around the current location of the vehicle C on its own display, and displays a mark indicating the current location of the vehicle C superimposed on the map. Furthermore, the navigation device 52 outputs a guidance voice that instructs the user about the direction of travel on the route to reach the destination point. Furthermore, the navigation device 52 may output a guidance voice indicating caution regarding traffic regulations, such as the speed limit of the road on which the vehicle C is traveling. The guidance voice of the navigation device 52 is output from the speaker 230. The navigation device 52 outputs guidance audio data Dsa, which is audio data corresponding to the guidance audio, to the control device 216. Further, the navigation device 52 may output position information of the vehicle C generated by a GPS (Global Positioning System) device, not shown, to the control device 216.
  • the camera 54 captures an image of the interior of the vehicle C and generates image data. Image data generated by camera 54 is output to control device 216.
  • the camera 54 may capture not only images inside the vehicle interior but also images outside the vehicle.
  • the camera 54 may also serve as, for example, a drive recorder mounted on the vehicle C or an imaging device for a safety system of the vehicle C.
  • the control device 216 functions as a vehicle information transmitting section 251, a setting receiving section 252, a second transmission controlling section 253, a receiving controlling section 254, and an output controlling section 255 by executing the program PG2. Details of the vehicle information transmitting section 251, setting receiving section 252, second transmission controlling section 253, receiving controlling section 254, and output controlling section 255 will be described later.
  • the amplifier 220 amplifies the sound data and supplies the amplified sound data to the speaker 230.
  • output sound data Do output from the control device 216 is input to the amplifier 220.
  • the speaker 230 outputs sound based on the output sound data Do.
  • a plurality of speakers 230 constitute a speaker set.
  • the arrangement of the plurality of speakers 230 differs depending on the vehicle C depending on the type of vehicle C or customization by the user.
  • the speaker 230 may be a single speaker.
  • FIG. 6 is a diagram illustrating the arrangement of the speakers 230 in the vehicle C.
  • Vehicle C includes seats P1 to P4.
  • Seat P1 and seat P2 are seats provided at the front of the cabin of vehicle C.
  • Seat P1 is the driver's seat
  • seat P2 is the passenger's seat.
  • Seat P3 and seat P4 are seats provided at the rear of the cabin of vehicle C.
  • Seat P3 is a seat located behind seat P1, which is a driver's seat
  • seat P4 is a seat located behind seat P2, which is a passenger seat.
  • the vehicle C also includes doors D1 to D4.
  • the door D1 is a door through which a passenger seated in the seat P1 gets on and off the vehicle. Note that the passenger is an example of a user.
  • Door D2 is a door through which a passenger seated in seat P2 gets on and off.
  • Door D3 is a door for a passenger seated in seat P3 to get on and off.
  • Door D4 is a door through which a passenger seated in seat P4 gets on and off.
  • Speakers 230A and 230B are provided on door D1. Speakers 230C and 230D are provided on door D2. Speaker 230E is provided on door D3. Speaker 230F is provided on door D4. In other words, speakers 230A and 230B are provided at locations corresponding to seat P1. Speakers 230C and 230D are provided at locations corresponding to seat P2. Speaker 230E is provided at a location corresponding to seat P3. Speaker 230F is provided at a location corresponding to seat P4.
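  • The seat-to-speaker correspondence of FIG. 6 can be captured in a simple lookup structure. The sketch below is illustrative only; the dictionary layout and the helper function are not part of the patent.

```python
# Seat-to-speaker arrangement from FIG. 6. Identifiers follow the figure;
# the data structure itself is only an illustration.

SPEAKER_LAYOUT = {
    "P1": ["230A", "230B"],  # driver's seat, door D1
    "P2": ["230C", "230D"],  # passenger seat, door D2
    "P3": ["230E"],          # rear seat behind P1, door D3
    "P4": ["230F"],          # rear seat behind P2, door D4
}

def speakers_for_seat(seat):
    """Return the speakers at the location corresponding to a seat."""
    return SPEAKER_LAYOUT.get(seat, [])
```

  • Such a table is one way the "second information" about a specific vehicle's speaker arrangement could be represented on the server side, since the arrangement differs per vehicle type or user customization.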
  • the sound output from the speaker 230 includes, for example, a sound based on at least one of the acquired sound data Dsy acquired by the sound data acquisition device 213, the stored sound data Dsm stored in the storage device 215, the system sound data Dss output from the vehicle ECU 50, and the guidance audio data Dsa output from the navigation device 52.
  • These acquired sound data Dsy, stored sound data Dsm, system sound data Dss, and guidance audio data Dsa are sound data stored in the in-vehicle audio device 20A or sound data output by a device connected to the in-vehicle audio device 20A.
  • the acquired sound data Dsy, stored sound data Dsm, system sound data Dss, and guidance sound data Dsa are hereinafter referred to as "local sound data Dsl.”
  • FIG. 3A is a diagram schematically showing an example of a data flow between the distribution server 30 and the in-vehicle audio device 20A.
  • Distribution server 30 distributes sound data via network N.
  • the sound data distributed by the distribution server 30 is hereinafter referred to as "distributed sound data Dsn.”
  • the distribution server 30 distributes, via the network N, distribution sound data Dsn indicating, for example, the sounds of songs, environmental sounds, talk programs, news programs, or language learning materials.
  • the distribution server 30 is not limited to distributing sound data, but may also distribute video data including sound data.
  • the distribution server 30 is provided, for example, by an operator of a distribution service that distributes audio data (including video data). Although one distribution server 30 is illustrated in FIG. 3A, a plurality of distribution servers 30 may be provided. For example, a plurality of sound data distribution companies may each provide distribution servers 30.
  • When receiving the distribution sound data Dsn from the distribution server 30, the user selects the desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. More specifically, the in-vehicle audio device 20A (a setting reception unit 252 described later; see FIG. 5) obtains a list of the plurality of distributed sound data Dsn distributed by the distribution server 30 and displays the list on the operating device 212 (touch panel). The user selects the desired distribution sound data Dsn from the list displayed on the operating device 212.
  • Instead of selecting a specific distributed sound data Dsn, the user may select an attribute of the distributed sound data Dsn (for example, the name of the creator of the sound data such as the artist name, the genre of the sound data, or a situation suitable for the sound data).
  • the in-vehicle audio device 20A (setting reception unit 252) sends information M (for example, song name, attribute, etc.) specifying the distributed sound data Dsn to the audio server 10 via the communication device 211 (S11).
  • the information M may include information specifying the format of sound data that can be reproduced by the in-vehicle audio device 20A.
  • the sound server 10 transmits information M to the distribution server 30 (S12). Based on information M, the distribution server 30 identifies the distribution sound data Dsn requested by the user from among the multiple distribution sound data Dsn. The distribution server 30 transmits the identified distribution sound data Dsn to the sound server 10 (S13). The sound server 10 transmits the distribution sound data Dsn to the in-vehicle sound device 20A (S14). At this time, the sound server 10 performs acoustic processing on the distribution sound data Dsn before transmitting it to the in-vehicle sound device 20A. That is, the sound server 10 acquires the distribution sound data Dsn as input sound data Di, and transmits the acoustically processed distribution sound data Dsn to the in-vehicle sound device 20A as output sound data Do.
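  • The relay flow of steps S11 to S14 can be sketched as follows. The classes, method names, and the placeholder gain are hypothetical; the real acoustic processing performed by the sound server would be far richer than a single gain stage.

```python
# Illustrative sketch of the FIG. 3A relay flow (S11-S14): the in-vehicle
# device sends selection info M to the acoustic server, which fetches the
# distributed sound data Dsn, processes it, and returns output sound data Do.

class DistributionServer:
    def __init__(self, catalog):
        self.catalog = catalog  # maps song name -> sound data (Dsn)

    def fetch(self, info_m):
        """S12/S13: identify the requested Dsn from info M and return it."""
        return self.catalog[info_m["song"]]

class AcousticServer:
    def __init__(self, distribution_server):
        self.distribution = distribution_server

    def handle_request(self, info_m):
        """S11 arrives here; returns Do for transmission back (S14)."""
        dsn = self.distribution.fetch(info_m)  # S12, S13
        do = [s * 0.8 for s in dsn]            # placeholder acoustic processing
        return do
```

  • The point of this arrangement is that the vehicle never talks to the distribution server directly; the acoustic server sits in the middle and always returns already-processed data.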
  • FIG. 3B is a diagram schematically showing another example of the data flow between the distribution server 30 and the in-vehicle audio device 20A.
  • the distribution server 30 and the in-vehicle audio device 20A are directly connected.
  • the in-vehicle audio device 20A obtains a list of a plurality of distributed sound data Dsn distributed by the distribution server 30, and receives selection of desired distributed sound data Dsn from the user.
  • the in-vehicle audio device 20A transmits information M specifying the distribution sound data Dsn to the distribution server 30 via the communication device 211 (S21).
  • the distribution server 30 specifies the distribution sound data Dsn requested by the user based on the information M, and transmits the specified distribution sound data Dsn to the in-vehicle audio device 20A (S22).
  • the in-vehicle audio device 20A transmits the distributed sound data Dsn to the audio server 10 (S23).
  • This distributed sound data Dsn becomes input sound data Di.
  • the acoustic server 10 performs acoustic processing on the distributed sound data Dsn, and transmits it to the vehicle-mounted audio device 20A as output sound data Do (S24).
  • steps S21 and S22 may be executed by the user's smartphone instead of the in-vehicle audio device 20A.
  • By using a smartphone, even a passenger riding in the rear seat P3 or seat P4 of the vehicle C can easily select the desired distributed sound data Dsn.
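  • The alternative flow of FIG. 3B (steps S21 to S24), in which the device fetches Dsn directly and then forwards it to the acoustic server, might be sketched like this; the catalog, function names, and placeholder gain are all illustrative.

```python
# Illustrative sketch of the FIG. 3B flow (S21-S24). Names and the
# placeholder gain are hypothetical, not from the patent.

def fetch_from_distribution(catalog, info_m):
    """S21/S22: the device (or the user's smartphone) requests Dsn directly."""
    return catalog[info_m["song"]]

def process_on_acoustic_server(dsn, gain=0.8):
    """S23/S24: the device forwards Dsn; the server returns processed Do."""
    return [s * gain for s in dsn]

catalog = {"song A": [1.0, -1.0]}
dsn = fetch_from_distribution(catalog, {"song": "song A"})
do = process_on_acoustic_server(dsn)
```

  • The difference from FIG. 3A is only in who contacts the distribution server; in both variants the acoustic processing itself stays on the server.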
  • A-3 Functional configuration
  • A-3-1 Acoustic server 10
  • the control device 103 of the acoustic server 10 functions as a first acquisition section 111, a second acquisition section 112, a parameter determination section 113, an output sound generation section 114, a first transmission control section 115, and a change reception section 116. In the following description, it is assumed that the one in-vehicle audio device 20 to which the acoustic server 10 provides the output sound data Do is the in-vehicle audio device 20A.
  • the first acquisition unit 111 acquires input sound data Di.
  • the first acquisition unit 111 is an example of a data acquisition unit.
  • the input sound data Di is sound data corresponding to the sound output from the in-vehicle audio device 20A.
  • the first acquisition unit 111 acquires the input sound data Di using the following two methods.
  • [1] Acquire input sound data Di from the in-vehicle sound device 20A
  • the first acquisition unit 111 acquires the acquired sound data Dsy or the stored sound data Dsm from the in-vehicle sound device 20A via the network N.
  • the first acquisition unit 111 acquires the system sound data Dss or the guidance voice data Dsa from the in-vehicle sound device 20A via the network N.
  • the first acquisition unit 111 acquires both or either of the sound data stored in the in-vehicle sound device 20A and the sound data output by the device connected to the in-vehicle sound device 20A as the input sound data Di. In other words, the first acquisition unit 111 acquires the local sound data Dsl as the input sound data Di.
  • [2] Acquire input sound data Di from the distribution server 30: When the user of the in-vehicle audio device 20A wishes to use the distribution sound data Dsn, the first acquisition unit 111 acquires the distribution sound data Dsn as the input sound data Di from the distribution server 30 via the network N. The method for acquiring the distribution sound data Dsn is as described with reference to FIGS. 3A and 3B.
  • the second acquisition unit 112 acquires both or one of first information regarding the attribute of the input sound data Di and second information regarding one of the vehicle-mounted audio devices 20A among the plurality of vehicle-mounted audio devices 20.
  • the second acquisition unit 112 is an example of an information acquisition unit. Details of ⁇ 1> first information and ⁇ 2> second information will be described below.
  • the first information is information regarding attributes of the input sound data Di.
  • the attributes of the input sound data Di include, for example, ⁇ 1-1> information regarding the format of the input sound data Di and/or ⁇ 1-2> information regarding the content of the sound.
  • Information regarding the content of the sound indicates, for example, at least one of the song title, artist name, or music genre of the input sound data Di.
  • Information on the format of the input sound data Di is information that specifies the format of the input sound data Di.
  • MP3 MPEG-1 Audio Layer-3; lossy compression
  • AAC Advanced Audio Coding; lossy compression
  • FLAC Free Lossless Audio Codec; lossless compression
  • WAV-PCM Waveform file format containing uncompressed PCM data
  • The above are examples of main sound data formats. For example, the formats of the stored sound data Dsm and the guidance voice data Dsa may differ.
  • the format of the distribution sound data Dsn distributed by the distribution server 30 may differ for each distribution service of sound data.
  • the second acquisition unit 112 acquires information on the format of the input sound data Di as the first information. Specifically, the second acquisition unit 112 determines the format of the input sound data Di based on the extension of the input sound data Di acquired by the first acquisition unit 111, for example.
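The extension-based format determination described above can be sketched as follows. This is a minimal illustration, not the embodiment's implementation; the extension-to-format mapping and the function name are assumptions.

```python
from pathlib import Path

# Hypothetical mapping from file extension to format name; the formats
# actually handled by the second acquisition unit 112 are not limited to these.
EXTENSION_TO_FORMAT = {
    ".mp3": "MP3",
    ".aac": "AAC",
    ".m4a": "AAC",
    ".flac": "FLAC",
    ".wav": "WAV-PCM",
}

def detect_format(filename: str) -> str:
    """Determine the sound data format from the file extension."""
    ext = Path(filename).suffix.lower()
    return EXTENSION_TO_FORMAT.get(ext, "UNKNOWN")
```

In practice the format would be confirmed from the file header rather than the extension alone; the extension serves as a first guess.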
  • Information regarding the sound content of the input sound data Di such as the song title, artist name, music genre, etc.
  • If the input sound data Di is music data, information such as the song title, artist name, and music genre is, for example, added to the input sound data Di as metadata.
  • the second acquisition unit 112 acquires information regarding the sound content of the input sound data Di, such as the song title, artist name, or music genre, based on, for example, the metadata of the input sound data Di acquired by the first acquisition unit 111.
  • the second information is information about the in-vehicle acoustic device 20A.
  • the information about the in-vehicle acoustic device 20A includes, for example, at least one of ⁇ 2-1> information indicating the acoustic characteristics of the in-vehicle acoustic device 20A (hereinafter referred to as "acoustic characteristic information") and ⁇ 2-2> information about the environment in which the in-vehicle acoustic device 20A is placed (hereinafter referred to as "environmental information").
  • the second acquisition unit 112 acquires acoustic characteristic information of the vehicle-mounted audio device 20A as second information.
  • the acoustic characteristic information of the vehicle-mounted audio device 20A is information indicating what kind of sound is heard by the user when the vehicle-mounted audio device 20A outputs sound based on predetermined sound data.
  • the acoustic characteristic information of the vehicle-mounted audio device 20A includes information regarding the performance of the vehicle-mounted audio device 20A and information regarding the space (vehicle cabin) through which the sound output from the vehicle-mounted audio device 20A (speaker 230) travels until it is heard by the user.
  • the second acquisition unit 112 measures, for example, the acoustic characteristics of the in-vehicle audio device 20A. More specifically, the second acquisition unit 112 transmits test sound data to the in-vehicle audio device 20A.
  • the in-vehicle audio device 20A outputs a sound corresponding to the test sound data (hereinafter referred to as "test sound") from the speaker 230.
  • test sounds are collected using external microphones placed at the positions of seats P1 to P4, and the collected sound data is output to the in-vehicle audio device 20A.
  • the test sound may be collected by the microphone 214 of the head unit 200 instead of using an external microphone.
  • the in-vehicle audio device 20A transmits collected sound data to the audio server 10.
  • the second acquisition unit 112 estimates the acoustic characteristics of the in-vehicle audio device 20A by acquiring the collected sound data and analyzing the collected sound data. Note that the in-vehicle audio device 20A may estimate the acoustic characteristics based on the collected sound data.
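One way to estimate an acoustic characteristic from the collected sound data is to compare the spectrum of the recorded test sound with that of the original test sound. The sketch below is an illustrative assumption: a real measurement would use a swept sine or noise test signal with averaging, whereas this single-shot version only estimates the gain of the playback chain at one frequency bin.

```python
import cmath
import math

def tone_magnitude(signal, k):
    """Magnitude of DFT bin k of `signal` (naive DFT; fine for a sketch)."""
    n = len(signal)
    return abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                   for i, x in enumerate(signal)))

def estimate_gain_at_bin(test_signal, recorded_signal, k):
    """Estimated gain of the chain (speaker, cabin, microphone) at bin k:
    the ratio of the recorded tone magnitude to the test tone magnitude."""
    return tone_magnitude(recorded_signal, k) / tone_magnitude(test_signal, k)
```

Repeating this over many frequency bins yields a magnitude response that could serve as acoustic characteristic information.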
  • the measurement of the acoustic characteristics may be performed in advance, for example, prior to using the in-vehicle audio device 20A.
  • In this case, the test sound is output and the collected sound data is transmitted before the in-vehicle audio device 20A is used.
  • the second acquisition unit 112 analyzes the collected sound data and estimates the acoustic characteristics of the in-vehicle audio device 20A.
  • the second acquisition unit 112 records the estimated acoustic characteristics as acoustic characteristic information in the vehicle-specific acoustic characteristic information DB (see FIG. 4).
  • the second acquisition unit 112 stores the acoustic characteristic information in the vehicle-specific acoustic characteristic information DB in association with the identification information for identifying the in-vehicle audio device 20A. That is, the vehicle-specific acoustic characteristic information DB includes information in which the identification information of the vehicle-mounted audio device 20A is associated with the acoustic characteristic information of the vehicle-mounted audio device 20A. The vehicle-specific acoustic characteristic information DB also stores acoustic characteristic information of other vehicle-mounted acoustic devices 20, such as the vehicle-mounted acoustic device 20B.
  • the second acquisition unit 112 searches the vehicle-specific acoustic characteristic information DB using the identification information of the in-vehicle acoustic device 20A as a key, so that the acoustic characteristic information of the in-vehicle acoustic device 20A can be acquired.
  • the measurement of the acoustic characteristics may be performed, for example, every time the in-vehicle audio device 20A is used. In this case, for example, the test sound is output and collected every time before the vehicle C starts traveling.
  • By measuring the acoustic characteristics each time the vehicle-mounted audio device 20A is used, it is possible to estimate the acoustic characteristics while reflecting the in-vehicle environment at that time. For example, the number of occupants and the riding positions in the vehicle C may vary from time to time.
  • the influence of sound absorption and reflection by the occupant's body can be reflected in the acoustic characteristics.
  • the second acquisition unit 112 may acquire at least one of information regarding the performance of the in-vehicle acoustic device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C as acoustic characteristic information of the in-vehicle acoustic device 20A, rather than measuring the acoustic characteristics using a test sound.
  • the parameter determination unit 113 which will be described later, can estimate the acoustic characteristics of the in-vehicle acoustic device 20 by performing a simulation using the information regarding the performance of the in-vehicle acoustic device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C.
  • the information regarding the performance of the in-vehicle audio device 20A includes, for example, the product numbers (model numbers) of the head unit 200, the speaker 230, and the amplifier 220. Further, the information regarding the specifications of the vehicle C is, for example, the vehicle type (including model number, grade, etc.) or cabin layout of the vehicle C in which the in-vehicle audio device 20A is mounted. Generally, by specifying the vehicle type (including model number, grade, etc.) of the vehicle C, it is possible to specify the interior layout and the material of the seat P arranged in the vehicle interior. On the other hand, if the user has retrofitted the speaker 230, for example, it is preferable to obtain information that can specify the actual cabin layout.
  • the cabin layout is information such as the dimensions of the cabin, the positions of the seats P1 to P4, and the positions of the speakers 230, for example. Further, the information regarding the occupants of the vehicle C includes information such as the number of occupants, the riding position (seated seat P), and the physique of the occupants.
  • the second acquisition unit 112 acquires, as the second information, environmental information related to the environment in which the vehicle-mounted sound device 20A is placed.
  • the environment in which the vehicle-mounted sound device 20A is placed is, for example, the vehicle C.
  • the environmental information includes, for example, information indicating the operation state of the vehicle C and vehicle information such as detection information of the sensor 74 mounted on the vehicle C.
  • At least one of the above-mentioned vehicle interior layout (information such as the dimensions of the vehicle interior, the positions of the seats P1 to P4, and the positions of the speakers 230) or information regarding the occupants of the vehicle C (information such as the number of occupants, their riding positions (seated seats P), and the physiques of the occupants) may be acquired as the environmental information.
  • the environmental information includes, for example, information regarding sounds generated around the in-vehicle audio device 20A (hereinafter referred to as "ambient sounds").
  • the ambient sound is the sound generated around the vehicle-mounted audio device 20A, for example, the sound generated inside or outside the vehicle C.
  • the sounds generated inside the vehicle C include, for example, the sounds of conversations between passengers, the sounds of conversations between passengers using smartphones, etc., the sounds of devices used by passengers using electronic devices such as smartphones, and the like.
  • Sounds generated outside the vehicle C include, for example, the running noise generated by the vehicle C itself, the running noise of other vehicles around the vehicle C, and the environmental sounds around the vehicle C (rain sound, wind sound, the guidance sounds of pedestrian signals, etc.).
  • the second acquisition unit 112 acquires, for example, sound data collected by the microphone 214 of the in-vehicle audio device 20A (hereinafter referred to as "collected sound data") as the environmental information.
  • the second acquisition unit 112 may also acquire, for example, at least one of the traveling position and traveling speed of the vehicle C, and an image of the outside of the vehicle captured by the camera 54, as the environmental information. This information is used to estimate the ambient sound.
  • the second acquisition unit 112 may acquire the environmental information from the in-vehicle audio device 20A, or may acquire the environmental information from a vehicle management server (not shown) that manages the vehicle C via the network N.
  • the vehicle management server acquires, via the network N, information such as information indicating the driving state of the vehicle C, information indicating the operation state of the vehicle C, and detection information of the sensors mounted on the vehicle C from a plurality of vehicles C traveling on the road.
  • the vehicle management server generates control data for controlling automatic driving in vehicle C, for example, using this information. Further, the vehicle management server may use this information to estimate, for example, the road congestion situation and distribute the congestion situation via the network N.
  • the ambient sound of vehicle C changes every moment. Therefore, the second acquisition unit 112 continuously acquires environmental information from the in-vehicle audio device 20A while transmitting the output sound data Do.
  • a parameter determining unit 113 which will be described later, re-determines the parameters of the sound processing when the environmental information changes.
  • the parameter determining unit 113 determines parameters to be used in acoustic processing for imparting acoustic effects to the input sound data Di, based on at least one of the first information and the second information. Generally, when the acoustic processing to be performed on the input sound data Di is determined, one or more "types of parameters" to be determined in order to perform the acoustic processing are specified.
  • the "parameter” determined by the parameter determining unit 113 means a specific "parameter value” corresponding to one or more "parameter types.” For example, when the "parameter type" is the volume, the "parameter value” is a volume value that specifies the volume of the volume (hereinafter sometimes simply referred to as "volume").
  • the acoustic processing that the acoustic server 10 performs on the input sound data Di includes at least one of [A] acoustic adjustment processing, [B] environment adaptation processing, [C] volume adjustment processing, and [D] format conversion processing.
  • the sound adjustment process is a process for improving the sound quality of the sound output from the in-vehicle audio device 20.
  • the sound adjustment process is, for example, the various processes conventionally executed by a DSP for in-vehicle audio.
  • the space inside the vehicle C is limited, and the distances between the user and each speaker 230 are different.
  • In the vehicle cabin, sound is likely to be reflected by the window glass and absorbed by the seats P, resulting in a situation where sound quality is likely to deteriorate.
  • the sound adjustment process is a process of adjusting the sound output from the in-vehicle audio device 20 so as to optimize the listening by the occupant seated in the seat P.
  • Time alignment is a process of changing the timing at which sound is output from each speaker 230 to focus the sound on the occupant of vehicle C (mainly the user sitting in the driver's seat).
  • the type of parameter is, for example, the output timing of sound in each speaker 230 (for example, the amount of delay of other speakers 230 with respect to the reference speaker 230).
  • An equalizer is a process that adjusts the sound balance by increasing or decreasing the gain (amplification of the input signal) for each frequency band.
  • the type of parameter is, for example, gain in each frequency band.
  • Crossover is a process of adjusting the output frequency band allocated to each speaker 230.
  • the type of parameter is, for example, a frequency band to be allocated to each speaker 230.
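The time alignment parameter described above (the delay of each speaker relative to a reference) can be sketched as follows. This is a simplified geometric model, an assumption for illustration: it derives each speaker's delay from its distance to the listening position so that all direct sounds arrive simultaneously, ignoring reflections.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def time_alignment_delays(distances_m):
    """Given the distance from each speaker 230 to the listening position,
    return the extra delay (seconds) applied to each speaker relative to
    the farthest one, so that all direct sounds arrive at the same time."""
    farthest = max(distances_m)
    return [(farthest - d) / SPEED_OF_SOUND for d in distances_m]
```

For example, a speaker 1 m closer than the farthest one would be delayed by about 2.9 ms.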
  • the parameter determination unit 113 determines parameters using the acoustic characteristic information of the in-vehicle audio device 20A acquired by the second acquisition unit 112.
  • the parameter determination unit 113 can directly determine the parameters from the acoustic characteristic information.
  • When the acoustic characteristic information is at least one of information regarding the performance of the in-vehicle audio device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C, the parameter determining unit 113 performs a simulation based on this information to estimate the acoustic characteristics of the in-vehicle audio device 20 and determines the parameters.
  • parameters may be determined based on information such as the song title, artist name, music genre, etc. of the input sound data Di, which is the first information.
  • the parameter determining unit 113 determines equalizer processing parameters based on, for example, the music genre of the input sound data Di. For example, if the music genre is rock, the parameter determination unit 113 relatively increases the volume of the high range corresponding to the electric guitar sound and the volume of the low range corresponding to the kick and bass sounds. Further, when the music genre is pop music, the parameter determining unit 113 relatively increases the volume of the midrange corresponding to the vocal sound. That is, when performing equalizer processing, the type of parameter is, for example, relative volume for each frequency band.
  • the environment adaptation process is a process of adjusting the volume of sound data based on the ambient sound of the in-vehicle audio device 20A. For example, if construction is being carried out around the vehicle C and noise is generated, or if the vehicle C is traveling at high speed and the running noise is loud, the parameter determination unit 113 increases the volume of the sound output from the speaker 230. Conversely, for example, when the passengers are talking with each other inside the vehicle, the parameter determining unit 113 may reduce the volume of the sound output from the speaker 230. Further, the parameter determination unit 113 may change the frequency of the sound output from the speaker 230 in accordance with the pitch (frequency) of the surrounding sound. That is, when executing the environment adaptation process, the type of parameter is, for example, the volume of the sound output from the speaker 230 or the frequency band of the sound output from the speaker 230.
  • the parameter determining unit 113 determines parameters using the environmental information acquired by the second acquiring unit 112. If the environmental information is sound data collected by the microphone 214, the parameter determining unit 113 analyzes the type and volume of the ambient sound from the sound data collected by the microphone 214. Then, the parameter determination unit 113 determines parameters based on the analysis results. Further, when the environmental information is at least one of the running position and speed of the vehicle C, and the image outside the vehicle captured by the camera 54, the parameter determining unit 113 determines the type and volume of the ambient sound based on these pieces of information. Estimate. Then, the parameter determination unit 113 determines parameters based on the estimation results.
  • the type and volume of ambient sound are estimated, for example, as follows.
  • the parameter determining unit 113 acquires, from the map data MP, at least one of information on the road surface condition, the predicted traffic volume, and the surrounding environment (busy town, residential area, mountainous area, etc.) at the traveling position of the vehicle C. Further, the parameter determining unit 113 may obtain real-time traffic volume or weather information around the traveling position of the vehicle C via the network N. When an image of the outside of the vehicle captured by the camera 54 is acquired, the parameter determining unit 113 detects at least one of the traffic volume, weather conditions, road surface conditions, and surrounding environment around the traveling position of the vehicle C by image analysis.
  • the parameter determining unit 113 estimates the type and volume of ambient sound of the vehicle-mounted audio device 20A.
  • the parameter determining unit 113 estimates the volume of the running sound of the vehicle C based on the traveling speed. Generally, the faster the vehicle C travels, the louder its running sound becomes.
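The speed-based part of the environment adaptation process can be sketched as follows. The noise model (road noise growing roughly with the logarithm of speed) and the compensation factor are illustrative assumptions, not values from the embodiment.

```python
import math

def playback_gain_db(speed_kmh: float, base_gain_db: float = 0.0) -> float:
    """Raise the playback volume as estimated road noise grows with speed."""
    if speed_kmh <= 10.0:
        return base_gain_db  # at low speed, road noise is treated as negligible
    # Assumed road-noise increase relative to 10 km/h, in dB.
    noise_db = 10.0 * math.log10(speed_kmh / 10.0)
    # Compensate for half of the estimated noise increase.
    return base_gain_db + 0.5 * noise_db
```

In the described system, this gain would feed into the volume parameter determined by the parameter determining unit 113 and be re-determined as the environmental information changes.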
  • the volume adjustment process is a process for adjusting the volume of each sound when two or more sounds are output simultaneously.
  • the volume adjustment process is executed, for example, when outputting a system sound such as a guidance sound from the navigation device 52 or a warning sound from the safety system of the vehicle C while outputting a sound such as a song.
  • As parameters, the parameter determining unit 113 determines the volume of the distribution sound data Dsn and the volume of the guidance voice data Dsa. More specifically, the parameter determining unit 113 determines each volume so that the volume of the music based on the distribution sound data Dsn is lower than the volume of the guidance voice.
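This music-under-guidance behavior is commonly called "ducking" and can be sketched as below. The function name and the duck ratio are assumptions for illustration; the embodiment only requires that the music volume end up lower than the guidance voice volume.

```python
def duck_volumes(music_volume: float, guidance_active: bool,
                 duck_ratio: float = 0.3, guidance_volume: float = 1.0):
    """Return (music_volume, guidance_volume) so that the music is quieter
    than the guidance voice while the guidance voice is playing."""
    if guidance_active:
        return (music_volume * duck_ratio, guidance_volume)
    return (music_volume, 0.0)
```

A production implementation would also ramp the music volume down and back up smoothly to avoid audible jumps.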
  • the presence or absence of sound output from the speakers 230A to 230F may be changed based on whether or not an occupant is in the seats P1 to P4.
  • the second acquisition unit 112 acquires information indicating the presence or absence of an occupant in each seat P1 to P4 of the vehicle C as second information.
  • the information indicating the presence or absence of an occupant may be, for example, an image captured by the camera 54 inside the vehicle, or may be a detection result of a seating sensor (not shown) provided in each of the seats P1 to P4.
  • the parameter determining unit 113 sets the volume output from the speaker 230 corresponding to the seat P where no passenger is sitting to zero or lower than normal.
  • the format conversion process is a process of converting the input sound data Di into a format that can be played by the in-vehicle audio device 20A.
  • a dedicated application is used for the distribution service provided by the distribution server 30.
  • the user needs to install dedicated applications for each, and perform tasks such as updating the dedicated applications as necessary.
  • the distributed sound data Dsn can be used without installing a dedicated application for each distribution service on the in-vehicle audio device 20A.
  • formats that can be played by the in-vehicle audio device 20A include, for example, the above-mentioned MP3, AAC, FLAC, WAV-PCM, and the like. The format after the format conversion processing is more preferably FLAC (lossless compression) or WAV-PCM (uncompressed), which have a light decompression (decoding) processing load and do not degrade sound quality.
  • the parameter determination unit 113 determines whether the format conversion processing is necessary based on the information regarding the format of the input sound data Di, which is the first information, and the information included in the information M that specifies the formats of sound data playable by the in-vehicle audio device 20A. Specifically, when the input sound data Di is in a format that can be played by the in-vehicle audio device 20, the parameter determining unit 113 determines that the format conversion processing is unnecessary. Conversely, when the input sound data Di is not in a format that can be played by the in-vehicle audio device 20, the parameter determining unit 113 determines to convert the input sound data Di into a format that can be played by the in-vehicle audio device 20. That is, the parameter determining unit 113 determines, as parameters, whether the format conversion processing is necessary for the input sound data Di and, if necessary, the destination format.
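The conversion decision above can be sketched as a small function. The preference order (FLAC, then WAV-PCM) follows the text's preference for lossless targets with a light decoding load; the function name and fallback rule are illustrative assumptions.

```python
# Preferred conversion targets, best first (lossless/uncompressed,
# light decoding load, no sound-quality degradation).
PREFERRED_TARGETS = ["FLAC", "WAV-PCM"]

def decide_format_conversion(input_format: str, playable_formats: list):
    """Return None when no conversion is needed, otherwise the target format."""
    if input_format in playable_formats:
        return None  # the device can play the data as-is
    for target in PREFERRED_TARGETS:
        if target in playable_formats:
            return target
    # Fall back to any format the device reports as playable.
    return playable_formats[0] if playable_formats else None
```

The `playable_formats` list corresponds to the format information carried in the information M.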
  • the output sound generation unit 114 processes the input sound data Di using the parameters determined by the parameter determination unit 113 to generate output sound data Do used in the in-vehicle audio device 20A.
  • the output sound generation unit 114 generates the output sound data Do by performing at least one of acoustic adjustment processing, environment adaptation processing, volume adjustment processing, or format conversion processing on the input sound data Di acquired by the first acquisition unit 111.
  • the first transmission control unit 115 transmits the output sound data Do to the in-vehicle audio device 20A via the network N.
  • the first transmission control section 115 is an example of a data transmission control section.
  • the output sound data Do transmitted by the first transmission control section 115 is received by the reception control section 254 of the in-vehicle audio device 20A.
  • the change reception unit 116 receives parameter changes from the user using the in-vehicle audio device 20A while the output sound data Do is being transmitted.
  • the change reception unit 116 is an example of a reception unit.
  • the parameter change may be, for example, a change in volume, or a relationship between a frequency band and gain in an equalizer (such as increasing the bass range).
  • When the change accepting unit 116 accepts a parameter change, the parameter determining unit 113 changes the parameters used for the acoustic processing to the parameters set by the user. As a result, the acoustic processing is performed in accordance with the change made by the user.
  • That is, when the change receiving unit 116 receives a parameter change, the content of the change is reflected in the subsequent acoustic processing.
  • Suppose that the user changes the parameters while the first output sound data Do, which is an example of the output sound data Do, is being transmitted.
  • the first output sound data Do is data generated by performing acoustic processing on the first input sound data Di, which is an example of the input sound data Di.
  • the change reception unit 116 associates the identification information of the in-vehicle audio device 20A, the identification information of the first input sound data Di, and the parameters before and after the change, and stores them in the storage device 102 as user setting data US.
  • When transmitting output sound data Do based on the first input sound data Di to the in-vehicle audio device 20A from the next time onward, the parameter determination unit 113 reads the parameters changed by the user from the user setting data US. The output sound generation unit 114 performs the acoustic processing using the parameters read from the user setting data US to generate the output sound data Do.
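The association and later lookup of user setting data US can be sketched with a mapping keyed by device and sound identifiers. The in-memory dictionary stands in for the storage device 102; all names here are illustrative.

```python
# Minimal in-memory stand-in for the user setting data US held in
# the storage device 102.
user_settings = {}

def save_user_setting(device_id: str, sound_id: str, params: dict):
    """Associate the in-vehicle audio device, the input sound data,
    and the user-changed parameters."""
    user_settings[(device_id, sound_id)] = params

def lookup_user_setting(device_id: str, sound_id: str, default_params: dict) -> dict:
    """On subsequent transmissions, reuse the user's changed parameters,
    falling back to the server-determined defaults."""
    return user_settings.get((device_id, sound_id), default_params)
```

Storing the parameters both before and after the change, as the text describes, would additionally allow the user's change to be undone.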
  • parameter changes made by the user are not limited to being reflected in the first input sound data Di itself, but may also be reflected in the same type of input sound data Di as the first input sound data Di.
  • For example, suppose that the first input sound data Di is music data whose music genre is rock.
  • In this case, when other input sound data Di of the rock genre is subsequently processed, the acoustic processing may be performed using the parameters changed by the user.
  • Similarly, when the user changes the parameters for the guidance voice data Dsa, the changed parameter values may be used for the subsequent acoustic processing of the guidance voice data Dsa.
  • the audio server 10 may aggregate the parameter changes received from the users of the plurality of vehicle-mounted audio devices 20A to 20N and reflect the results in the parameter determination by the parameter determination unit 113. For example, if many users make similar parameter changes to the output sound data Do generated based on second input sound data Di, which is an example of the input sound data Di, the parameters determined by the parameter determination unit 113 may not match the preferences of many users. In this case, the parameter determination unit 113 sets the parameters to be used for the acoustic processing of the second input sound data Di to the parameters changed by many users. This makes it possible to realize acoustic processing that reflects the tastes or trends of many users.
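The aggregation of parameter changes across users can be sketched as a majority vote. The threshold and data shape (a list of `(parameter_type, new_value)` pairs for one piece of input sound data) are illustrative assumptions; the embodiment does not specify how "many users" is decided.

```python
from collections import Counter

def majority_parameter_change(changes, threshold: float = 0.5):
    """Given (parameter_type, new_value) changes collected from many users
    for the same input sound data, return the change adopted by more than
    `threshold` of users, or None when no change dominates."""
    if not changes:
        return None
    counts = Counter(changes)
    change, n = counts.most_common(1)[0]
    return change if n / len(changes) > threshold else None
```

A dominant change returned here would become the default parameter for subsequent acoustic processing of that input sound data.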
  • A-3-2 In-vehicle audio device 20
  • the control device 216 of the in-vehicle audio device 20 functions as a vehicle information transmitting section 251, a setting accepting section 252, a second transmission controlling section 253, a receiving controlling section 254, and an output controlling section 255.
  • the vehicle information transmitter 251 transmits at least either the acoustic characteristic information of the vehicle-mounted audio device 20A or the environmental information of the vehicle C to the acoustic server 10. At least either the acoustic characteristic information of the in-vehicle audio device 20A or the environmental information of the vehicle C transmitted from the vehicle information transmitting section 251 is acquired by the second acquiring section 112 of the acoustic server 10.
  • the setting receiving unit 252 accepts, from the user, selection of desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. Further, the setting reception unit 252 transmits information M specifying the selected distribution sound data Dsn to the acoustic server 10.
  • the setting receiving unit 252 receives changes in sound processing parameters from the user using the in-vehicle audio device 20A while outputting sound based on the output sound data Do.
  • the parameter change may be, for example, a change in volume, a change in the relationship between a frequency band and gain in the equalizer (such as boosting the bass range), or a change in other parameters.
  • the setting reception unit 252 transmits the contents of the parameter change received from the user to the audio server 10.
  • the second transmission control unit 253 transmits the local sound data Dsl to the audio server 10.
  • the local sound data Dsl transmitted by the second transmission control unit 253 includes, for example, the acquired sound data Dsy or the stored sound data Dsm that the user has instructed to output, the system sound data Dss output from the vehicle ECU 50, and the guidance voice data Dsa output from the navigation device 52.
  • the reception control unit 254 receives the output sound data Do from the audio server 10 via the network N.
  • the output sound data Do is data obtained by performing acoustic processing on the local sound data Dsl, or data obtained by performing acoustic processing on the distributed sound data Dsn.
  • the output control unit 255 outputs the output sound data Do received by the reception control unit 254 to the amplifier 220.
  • Amplifier 220 amplifies output sound data Do and outputs it to speaker 230.
  • the speaker 230 outputs sound based on the output sound data Do.
  • FIG. 7 is a flowchart showing the operation of the control device 103 of the sound server 10.
  • various data may be transmitted and received in file units or in packet units.
  • the control device 103 functions as a first acquisition unit 111 and acquires input sound data Di from the in-vehicle sound device 20A or the distribution server 30 (step S20).
  • the control device 103 functions as a second acquisition unit 112 and acquires at least one of first information on the attributes of the input sound data Di and second information on the in-vehicle sound device 20A (step S21).
  • the control device 103 functions as the parameter determining unit 113, and determines parameters to be used for acoustic processing on the input sound data Di, based on at least one of the first information and the second information (step S22).
  • the control device 103 functions as an output sound generation unit 114, and generates output sound data Do used in the in-vehicle audio device 20A by performing acoustic processing on the input sound data Di using the parameters determined in step S22 (step S23).
  • the control device 103 functions as the first transmission control section 115 and transmits the output sound data Do to the in-vehicle audio device 20A (step S24).
  • the control device 103 functions as the second acquisition unit 112 and acquires environmental information, which is an example of second information, from the in-vehicle audio device 20A (step S25).
  • Control device 103 determines whether the ambient sound of vehicle-mounted audio device 20A has changed based on the environmental information (step S26). If the ambient sound has not changed (step S26: NO), the control device 103 advances the process to step S28. On the other hand, if the ambient sound has changed (step S26: YES), the control device 103 functions as the parameter determining unit 113 and changes the sound processing parameters to match the changed ambient sound (step S27). Note that if the environmental information is not acquired in step S21, or if the environmental information is not used to determine the parameters in step S22, the control device 103 may skip the processing of steps S25 to S27.
  • the control device 103 functions as the change reception unit 116, and receives changes to the parameters of the sound processing from the user (step S28). If there is no parameter change from the user (step S28: NO), the control device 103 advances the process to step S30. On the other hand, if a parameter change is accepted from the user (step S28: YES), the control device 103 functions as the parameter determining unit 113, and changes the parameter according to the user's change (step S29). Until the transmission of the output sound data Do based on the input sound data Di acquired in step S20 is completed (step S30: NO), the control device 103 returns the process to step S23. When the transmission of the output sound data Do is completed (step S30: YES), the control device 103 returns the process to step S20.
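The control flow of steps S20 through S30 can be sketched as a simple streaming loop. The following is a minimal, hypothetical Python model in which the acoustic processing is reduced to a single gain multiplication and ambient-sound changes and user changes are supplied as lookup tables; none of the names or simplifications below come from the actual implementation.

```python
def stream_session(input_chunks, base_gain, env_events, user_events):
    """Process one input sound stream (Di) chunk by chunk into output chunks (Do).

    env_events / user_events map a chunk index to a new gain value, emulating
    an ambient-sound change (S26/S27) and a user parameter change (S28/S29).
    """
    gain = base_gain                                   # S22: initial parameter
    output = []
    for i, chunk in enumerate(input_chunks):           # loop until S30 completes
        output.append([s * gain for s in chunk])       # S23: acoustic processing
        # S24: the processed chunk would be transmitted to the device here.
        if i in env_events:                            # S25/S26: ambient changed
            gain = env_events[i]                       # S27: re-determine gain
        if i in user_events:                           # S28: user change received
            gain = user_events[i]                      # S29: apply user change
    return output

out = stream_session(
    input_chunks=[[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]],
    base_gain=0.5,
    env_events={0: 0.8},     # cabin gets louder after the first chunk
    user_events={1: 1.0},    # user raises the volume after the second chunk
)
```

The point of the sketch is that the parameter (here a gain) is revisited on every chunk, so both environmental changes and user changes take effect mid-stream without restarting the session.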
  • the acoustic server 10 generates the output sound data Do by performing acoustic processing on the input sound data Di, and transmits the output sound data Do to the in-vehicle audio device 20A. Therefore, it is not necessary to provide a control device for performing sound processing in the in-vehicle audio device 20A. The configuration of the in-vehicle audio device 20A is thus simplified, and as a result, the cost of the in-vehicle audio device 20A is reduced.
  • the acoustic server 10 determines parameters used for acoustic processing on the input sound data Di based on at least one of first information regarding the attributes of the input sound data Di and second information regarding the in-vehicle audio device 20A. Therefore, the sound processing parameters are appropriately set, so that the sound quality of the sound based on the output sound data Do is improved.
  • the acoustic server 10 acquires information indicating the acoustic characteristics of the in-vehicle audio device 20A as second information. Therefore, since the acoustic characteristics of the in-vehicle audio device 20A are reflected in the acoustic processing parameters, the audio server 10 can perform acoustic processing suitable for using the output sound data Do in the in-vehicle audio device 20A.
  • the acoustic server 10 acquires environmental information of the in-vehicle audio device 20A as second information. Therefore, since the ambient sound of the in-vehicle audio device 20A is reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suitable for using the output sound data Do in an environment where ambient sounds occur.
  • the acoustic server 10 continuously acquires environmental information while transmitting the output sound data Do, and re-determines the parameters when the environmental information changes. Therefore, since changes in the ambient sound are reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suitable for using the output sound data Do in an environment where the ambient sound changes from moment to moment.
  • the sound server 10 also determines whether the input sound data Di needs format conversion and, if so, which format to convert it to. This allows the user to use the input sound data Di on the in-vehicle sound device 20A without being aware of the data format.
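The format-conversion decision just described can be expressed as a small helper. The format names and the notion of a single "preferred" target format below are illustrative assumptions, not details taken from the patent.

```python
def plan_conversion(input_format, playable_formats, preferred_format):
    """Return the target format for conversion, or None if no conversion is needed."""
    if input_format in playable_formats:
        return None                  # the device can use the data as-is
    return preferred_format          # otherwise convert before transmission

no_conv = plan_conversion("mp3", {"mp3", "aac"}, "aac")
conv = plan_conversion("flac", {"mp3", "aac"}, "aac")
```

Because this check runs on the server, the user never has to know which formats the in-vehicle device supports.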
  • the audio server 10 accepts changes to audio processing parameters from the user. Therefore, the audio server 10 can perform audio processing that reflects the user's preferences or situations that are not reflected in the first information or the second information.
  • the audio server 10 obtains input sound data Di from the distribution server 30 that distributes sound data via the network N. Therefore, the user can use various sound data other than the local sound data Dsl of the in-vehicle audio device 20A on the in-vehicle audio device 20A, and the user's convenience can be improved.
  • the audio server 10 acquires input sound data Di from the in-vehicle audio device 20A. Therefore, since the audio server 10 can perform acoustic processing on the sound data acquired from the in-vehicle audio device 20A, the processing load on the in-vehicle audio device 20A is reduced compared to the case where the in-vehicle audio device 20A performs the audio processing itself.
  • the audio server 10 acquires, as input sound data Di, at least one of the sound data stored in the in-vehicle audio device 20A and the sound data output from the equipment connected to the in-vehicle audio device 20A.
  • the audio server 10 performs audio processing on at least one of the sound data stored in the vehicle-mounted audio device 20A and the sound data output from the equipment connected to the vehicle-mounted audio device 20A. Therefore, the audio server 10 can improve user convenience.
  • the audio server 10 generates output sound data Do used by the in-vehicle audio device 20A that outputs sound into the cabin of the vehicle C. Therefore, the acoustic server 10 can improve the sound quality of the sound output into the cabin of the vehicle C, which has a poor sound listening environment unlike inside a building.
  • FIG. 8 is a block diagram showing the configuration of the acoustic server 10 in the second embodiment.
  • the control device 103 of the acoustic server 10 functions as a vehicle sound generation section 117 in addition to the functional configuration in the first embodiment.
  • the vehicle sound generation unit 117 generates vehicle sound data indicating the sound to be output from the in-vehicle audio device 20A based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. Further, the first transmission control unit 115 transmits the vehicle sound data to the in-vehicle audio device 20A via the network N.
  • the vehicle sound data includes, for example, [1] engine sound data Dse indicating a virtual engine sound, or [2] alarm sound data Dsk notifying surrounding obstacles or the like.
  • Engine sound data Dse: For example, when the vehicle C is an electric vehicle powered by an electric motor, a virtual engine sound may be output in order to evoke a sense of driving in the user riding in the vehicle C.
  • the vehicle sound generation unit 117 generates engine sound data Dse corresponding to the engine sound output from the vehicle-mounted audio device 20A.
  • the second acquisition unit 112 acquires the traveling speed information and accelerator opening information of the vehicle C from the in-vehicle audio device 20A as the second information.
  • the traveling speed information is an example of information indicating the traveling state of the vehicle C.
  • the accelerator opening degree information is an example of information indicating the operating state of the vehicle C.
  • engine sound data Dse is stored in the storage device 102 of the acoustic server 10.
  • the engine sound data Dse includes a plurality of engine sound data Dse_1 to Dse_25 (see FIG. 9).
  • the vehicle sound generation unit 117 selects one engine sound data Dse from among the plurality of engine sound data Dse_1 to Dse_25.
  • the first transmission control unit 115 transmits the selected engine sound data Dse to the in-vehicle audio device 20A as vehicle sound data.
  • the vehicle sound generation unit 117 determines the virtual engine rotation speed of the vehicle C (hereinafter referred to as "virtual engine rotation speed") based on the traveling speed information of the vehicle C.
  • the vehicle sound generation unit 117 determines the virtual engine rotation speed based on reference information (not shown) indicating the correspondence between the traveling speed of the vehicle C and the virtual engine rotation speed, for example.
  • the vehicle sound generation unit 117 selects one engine sound data Dse from the plurality of engine sound data Dse_1 to Dse_25 based on the virtual engine rotation speed and accelerator opening information.
  • FIG. 9 is a diagram schematically showing a map F for selecting engine sound data Dse from virtual engine speed and accelerator opening information. Map F is stored in the storage device 102, for example. Although FIG. 9 shows a case where the number of engine sound data Dse is 25, the number of engine sound data Dse is not limited to 25. Vehicle sound generation unit 117 identifies one piece of engine sound data Dse in map F that corresponds to a region where the virtual engine speed of vehicle C and the accelerator opening information intersect.
  • Vehicle sound generation unit 117 reads out the one piece of engine sound data Dse from the storage device 102.
  • the first transmission control unit 115 transmits one piece of engine sound data Dse read out by the vehicle sound generation unit 117 to the vehicle-mounted audio device 20A as vehicle sound data.
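The selection from map F can be sketched as a two-dimensional bin lookup. The 5x5 grid matches Fig. 9, but the bin boundaries and the linear speed-to-RPM reference relation below are purely illustrative assumptions; the patent does not specify them.

```python
import bisect

# Row/column boundaries of a hypothetical 5x5 map F.
RPM_BINS = [1000, 2000, 3000, 4000]     # virtual engine speed (rpm), rows 0-4
ACCEL_BINS = [20, 40, 60, 80]           # accelerator opening (%), columns 0-4

def virtual_engine_rpm(speed_kmh):
    """Reference information: travel speed -> virtual engine speed (assumed linear)."""
    return 800 + 40 * speed_kmh

def select_engine_sound(speed_kmh, accel_percent):
    """Pick one of Dse_1..Dse_25 from virtual engine speed and accelerator opening."""
    row = bisect.bisect_right(RPM_BINS, virtual_engine_rpm(speed_kmh))
    col = bisect.bisect_right(ACCEL_BINS, accel_percent)
    return f"Dse_{row * 5 + col + 1}"
```

For example, an idling vehicle with a closed accelerator falls into the first cell (Dse_1), while high speed with a wide-open accelerator falls into the last cell (Dse_25).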
  • Alarm sound data Dsk: In the first embodiment, various alarm sounds accompanying the travel of the vehicle C were included in the system sound data Dss output from the vehicle ECU 50.
  • the vehicle sound generation unit 117 generates the alarm sound data Dsk based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. As shown in FIG. 8, alarm sound data Dsk is stored in the storage device 102 of the acoustic server 10.
  • the alarm sound data Dsk includes a plurality of alarm sound data Dsk.
  • the vehicle sound generation unit 117 selects one alarm sound data Dsk from among the plurality of alarm sound data Dsk.
  • the first transmission control unit 115 transmits the selected alarm sound data Dsk to the vehicle-mounted audio device 20A as vehicle sound data.
  • the second acquisition unit 112 acquires information indicating the operating state of the shift lever as information indicating the operating state of the vehicle C.
  • the vehicle sound generation unit 117 selects, from among the plurality of alarm sound data Dsk, the alarm sound data Dsk indicating that the vehicle C is moving backward, which is then transmitted as vehicle sound data to the in-vehicle audio device 20A.
  • the second acquisition unit 112 acquires traveling speed information of the vehicle C as information indicating the traveling state of the vehicle C.
  • the vehicle sound generation unit 117 selects, from among the plurality of alarm sound data Dsk, the alarm sound data Dsk indicating that the vehicle C is overspeeding, which is then transmitted as vehicle sound data to the in-vehicle audio device 20A.
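The two alarm-selection examples above (reverse operation and overspeed) can be condensed into one decision function. The identifiers "Dsk_reverse" and "Dsk_overspeed" and the priority of reverse over overspeed are hypothetical choices for illustration.

```python
def select_alarm(shift_position, speed_kmh, speed_limit_kmh):
    """Return the alarm sound data to send as vehicle sound data, or None."""
    if shift_position == "R":         # operating state: shift lever in reverse
        return "Dsk_reverse"          # alarm indicating backward movement
    if speed_kmh > speed_limit_kmh:   # traveling state: overspeed
        return "Dsk_overspeed"
    return None                       # no alarm sound required
```

The function consumes exactly the two kinds of second information named in the text: the shift-lever operating state and the traveling speed.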
  • At least one of the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C may be used for acoustic processing of the input sound data Di.
  • For example, suppose that information indicating the operating position of the shift lever is acquired as information indicating the operating state of the vehicle C, and the shift lever is operated to reverse (R). In this case, the volume of sound data other than the system sound may be reduced.
  • Similarly, suppose that the running speed information of the vehicle C is acquired as information indicating the running state of the vehicle C, and the vehicle C is accelerating. In this case, the volume of the speaker 230 may be increased in response to the increase in the running sound of the vehicle C.
  • the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C are not limited to being acquired from the in-vehicle audio device 20A.
  • the operation state of the vehicle C or the running state of the vehicle C may be detected from an image of a surveillance camera placed on a road or an image of a camera mounted on another vehicle C.
  • the acoustic server 10 acquires at least one of the information indicating the operation state of the vehicle C and the information indicating the running state of the vehicle C as the second information. Therefore, the audio server 10 can reflect at least one of the operating state of the vehicle C and the running state of the vehicle C in the audio processing of the input sound data Di.
  • the audio server 10 generates vehicle sound data indicating the sound to be output from the in-vehicle audio device 20A based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. Therefore, the acoustic server 10 can reduce the processing load on the vehicle ECU 50 compared to the case where the vehicle ECU 50 generates the vehicle sound data.
  • FIG. 10 is a block diagram showing the configuration of the in-vehicle sound device 20A in the third embodiment.
  • the control device 216 of the in-vehicle sound device 20A functions as a setting acceptance unit 252, a reception control unit 254, and an output control unit 255.
  • the control device 216 of the in-vehicle sound device 20A does not function as a vehicle information transmission unit 251 and a second transmission control unit 253. That is, in the third embodiment, the local sound data Dsl is not transmitted from the in-vehicle sound device 20A to the sound server 10. Also, in the third embodiment, the acoustic characteristic information and environmental information are not transmitted from the in-vehicle sound device 20A to the sound server 10.
  • the in-vehicle audio device 20A includes an audio control device 240 between the control device (main control device) 216 and the amplifier 220.
  • the sound control device 240 is configured with a processor having lower performance than the control device 216 of the head unit 200.
  • the sound control device 240 adjusts the balance between the sound based on the output sound data Do and the sound based on the local sound data Dsl.
  • the sound based on the local sound data Dsl is at least one of the sound based on the sound data stored in the vehicle-mounted audio device 20A and the sound based on the sound data output by the device connected to the vehicle-mounted audio device 20A.
  • the setting reception unit 252 accepts the selection of desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. Further, the setting reception unit 252 transmits information M specifying the selected distribution sound data Dsn to the audio server 10.
  • the reception control unit 254 receives the output sound data Do from the acoustic server 10 via the network N.
  • the output sound data Do is data obtained by performing acoustic processing on the distribution sound data Dsn.
  • the output sound generation unit 114 of the audio server 10 performs format conversion processing on the input sound data Di to generate output sound data Do. More specifically, the output sound generation unit 114 of the audio server 10 generates the output sound data Do by converting the input sound data Di, which is the distributed sound data Dsn, into a format that can be played by the audio control device 240.
  • the output control unit 255 outputs the output sound data Do received by the reception control unit 254 to the audio control device 240.
  • the acoustic control device 240 reproduces the output sound data Do and outputs it to the amplifier 220.
  • Amplifier 220 amplifies output sound data Do and outputs it to speaker 230.
  • the speaker 230 outputs sound based on the output sound data Do.
  • the output control unit 255 outputs the local sound data Dsl to the audio control device 240.
  • the sound control device 240 processes each sound data so as to adjust the balance between the sound based on the output sound data Do and the sound based on the local sound data Dsl, and outputs the processed sound data to the amplifier 220. Specifically, when the guidance voice of the navigation device 52 (the guidance voice based on the guidance audio data Dsa) is to be output while playback sound based on the output sound data Do is being output, the sound control device 240 lowers the volume of the playback sound and then outputs the guidance voice.
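The balance adjustment described above is a form of "ducking": while a guidance voice is active, the playback sound is attenuated before the two signals are mixed. The following minimal sketch treats each sound as a list of samples; the attenuation factor is a hypothetical value, not one given in the text.

```python
DUCK_GAIN = 0.2   # hypothetical attenuation applied to the playback sound

def mix(playback_chunk, guidance_chunk=None):
    """Mix one chunk of playback sound with an optional guidance voice chunk."""
    if guidance_chunk is None:
        return list(playback_chunk)            # no guidance: pass through
    # Guidance active: duck the playback sound, then add the guidance voice.
    return [p * DUCK_GAIN + g for p, g in zip(playback_chunk, guidance_chunk)]
```

Because the sound control device 240 only mixes and rebalances already-processed streams, this kind of operation is cheap enough for a processor with lower performance than the head unit's control device 216.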
  • the in-vehicle audio device 20A includes a sound control device 240 that adjusts the balance between the sound based on the local sound data Dsl and the sound based on the output sound data Do.
  • the audio server 10 generates output sound data Do by converting the input sound data Di into a format that can be played by the audio control device 240.
  • the user can use the distributed sound data Dsn without being aware of the format that can be played by the audio control device 240. Further, since it is not necessary to install a dedicated application for each distribution server 30 and to update the version of the dedicated application, for example, user convenience can be improved.
  • FIG. 11 is a diagram illustrating the configuration of the information processing system 2 according to the fourth embodiment. Similar to the information processing system 1, the information processing system 2 includes a plurality of in-vehicle audio devices 20 (20A to 20N). Information processing system 2 includes an audio server 10, an in-vehicle audio device 20A, and a smartphone 40. In the fourth embodiment, the audio server 10 is an example of an information processing device that performs processing regarding a plurality of output sound data Do used in each of the plurality of vehicle-mounted audio devices 20A to 20N.
  • the audio server 10 determines the parameters used for audio processing, and performs audio processing on the input sound data Di to generate the output sound data Do.
  • the audio server 10 determines the parameters used for audio processing, and also determines the audio processing device that performs the audio processing.
  • the sound processing device can be, for example, at least one of the sound server 10, the smartphone 40, and the vehicle-mounted sound device 20A.
  • the acoustic server 10, smartphone 40, and vehicle-mounted audio device 20A that are candidates for the acoustic processing device will be referred to as "candidate devices.”
  • the smartphone 40 and the in-vehicle audio device 20A are capable of performing at least some of the above-described audio processing.
  • A program for performing at least some of the audio processing is installed in the control device of the smartphone 40 and in the control device 216 of the in-vehicle audio device 20A, so these devices can perform acoustic processing using the parameters transmitted from the audio server 10.
  • the smartphone 40 is an electronic device carried by a user who uses the vehicle C and the in-vehicle audio device 20A.
  • the smartphone 40 is an example of another device different from the audio server 10.
  • the smartphone 40 communicates with the in-vehicle audio device 20A in the vehicle C using short-range wireless communication such as Bluetooth (registered trademark).
  • a music distribution application is installed on the smartphone 40, and the distribution sound data Dsn can be acquired from the distribution server 30 (see FIG. 2, etc.).
  • the distributed sound data Dsn acquired by the smartphone 40 is transmitted, for example, to the in-vehicle audio device 20A, and output from the speaker of the in-vehicle audio device 20A.
  • FIG. 12 is a block diagram showing the configuration of the acoustic server 10 in the fourth embodiment.
  • the control device 103 of the acoustic server 10 includes a first acquisition unit 111, a second acquisition unit 112, a parameter determination unit 113, an output sound generation unit 114, a first transmission control unit 115, a device determining unit 118, and a third transmission control unit 119.
  • the second acquisition unit 112 and the parameter determination unit 113 function in the same manner as in the first embodiment.
  • the second acquisition unit 112 acquires at least one of first information regarding the attribute of the input sound data Di and second information regarding the in-vehicle audio device 20A.
  • the second acquisition unit 112 is an example of an information acquisition unit.
  • the parameter determination unit 113 determines parameters to be used in acoustic processing for imparting acoustic effects to the input sound data Di, based on at least one of the first information and the second information.
  • the device determining unit 118 determines a sound processing device that performs sound processing on the input sound data Di, based on at least one of the first information and the second information. For example, the device determining unit 118 may obtain, as an example of the first information, the terms of use of the distributed sound data Dsn (whether or not the audio server 10 can obtain the input sound data Di).
  • In the terms of use of a distribution service that distributes the distributed sound data Dsn, it may be stipulated that only devices used by users registered with the distribution service (for example, the smartphone 40 or the in-vehicle audio device 20A) can acquire the distributed sound data Dsn. In this case, the audio server 10 cannot acquire the distributed sound data Dsn and cannot function as the sound processing device.
  • the device determining unit 118 determines the smartphone 40 or the in-vehicle audio device 20A as the sound processing device.
  • the device determining unit 118 may obtain, for example, information indicating the communication status between the in-vehicle audio device 20A and the audio server 10 as the second information. If the communication condition between the in-vehicle audio device 20A and the audio server 10 is poor, there is a possibility that a delay will occur in the transmission of the output sound data Do after the audio processing. Therefore, the device determining unit 118 determines that the in-vehicle audio device 20A or the smartphone 40 performs the audio processing.
  • the device determining unit 118 may determine the sound processing device based on, for example, information regarding the audio server 10, the smartphone 40, and the in-vehicle audio device 20A (hereinafter referred to as "candidate device information").
  • the candidate device information is, for example, information regarding the performance of the candidate device. Specifically, it is, for example, the product number (model number) of the candidate device or the components used in the candidate device (control device, recording device, etc.).
  • the device determining unit 118 determines that the candidate device performs the audio processing when the candidate device has data processing capability capable of performing the audio processing. If the candidate device has low processing capacity, delays may occur if the candidate device performs audio processing that requires a high load. By obtaining information regarding the performance of the candidate device, it is possible to appropriately set the acoustic processing load to be performed by the candidate device.
  • the candidate device information may be, for example, the real-time operating status (processing load) of the candidate device. If the candidate device is performing processing with a high load other than audio processing, if audio processing is further imposed, there is a possibility that processing will be delayed. Therefore, the device determining unit 118 may determine, among the candidate devices, the candidate device with the smallest current processing load as the audio processing device.
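The device-determination criteria described above (terms of use, communication status, device performance, and real-time load) can be combined into one selection routine. The attribute names and the filter-then-lowest-load strategy below are assumptions condensing the text, not the patented algorithm itself.

```python
def choose_processing_device(candidates):
    """candidates: list of dicts describing the server, smartphone and car device."""
    eligible = [
        c for c in candidates
        if c["can_acquire"]    # terms of use allow this device to get the data
        and c["link_ok"]       # communication state will not delay Do
        and c["capable"]       # processing capability suffices for the load
    ]
    if not eligible:
        return None
    # Among eligible candidates, pick the one with the lowest current load.
    return min(eligible, key=lambda c: c["load"])["name"]

candidates = [
    {"name": "acoustic server 10", "can_acquire": False, "link_ok": True,
     "capable": True, "load": 0.1},   # terms of use forbid server-side acquisition
    {"name": "smartphone 40", "can_acquire": True, "link_ok": True,
     "capable": True, "load": 0.3},
    {"name": "in-vehicle audio device 20A", "can_acquire": True, "link_ok": True,
     "capable": True, "load": 0.6},
]
```

In this example the server is excluded by the terms of use, so the smartphone, having the lower current load of the remaining candidates, becomes the sound processing device.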
  • The following describes three cases: the case where the sound processing device is the sound server 10, the case where the sound processing device is a device other than the sound server 10, and the case where the sound processing device is a plurality of devices.
  • the first acquiring unit 111 acquires the input sound data Di.
  • the output sound generation unit 114 performs acoustic processing on the input sound data Di using the parameters determined by the parameter determination unit 113, thereby generating output sound data Do used in the vehicle-mounted audio device 20A.
  • the first transmission control unit 115 transmits the output sound data Do to the in-vehicle audio device 20A via the network N.
  • the third transmission control unit 119 transmits the parameters determined by the parameter determining unit 113 to the other device.
  • the third transmission control section 119 is an example of a parameter transmission control section.
  • the input sound data Di may be transmitted to another device together with the parameters.
  • Other devices generate output sound data Do by subjecting input sound data Di to acoustic processing using parameters.
  • For example, when the smartphone 40 is the sound processing device, the output sound data Do is transmitted from the smartphone 40 to the in-vehicle audio device 20A.
  • When the acoustic processing includes a plurality of processes (steps), the series of processes may be shared among a plurality of devices. This makes it possible to avoid concentrating the processing load on a specific device.
  • the audio server 10 may perform some of the plurality of processes, and the remaining processes may be performed by another device (for example, the smartphone 40).
  • the parameter determining unit 113 determines the first parameter used in the first process and the second parameter used in the second process. It is assumed that the device determining unit 118 determines that the acoustic server 10 executes the first process and that the smartphone 40 executes the second process.
  • the smartphone 40 is an example of a device other than the audio server 10.
  • the output sound generation unit 114 performs first processing on the input sound data Di using the first parameter to generate partially processed data.
  • the first transmission control unit 115 transmits the partially processed data and the second parameter to the smartphone 40.
  • the smartphone 40 performs second processing on the partially processed data using the second parameter, and generates output sound data Do.
  • the output sound data Do is transmitted from the smartphone 40 to the in-vehicle audio device 20A, and is output from the speaker 230 of the in-vehicle audio device 20A.
  • the plurality of devices that share sound processing may be the smartphone 40 and the vehicle-mounted audio device 20A.
  • the parameter determining unit 113 determines the first parameter used in the first process and the second parameter used in the second process.
  • the device determining unit 118 determines that the first process will be performed by the smartphone 40, and determines that the second process will be performed by the in-vehicle audio device 20A.
  • the smartphone 40 is an example of a first sound processing device
  • the in-vehicle audio device 20A is an example of a second sound processing device.
  • the third transmission control unit 119 transmits the first parameter to the smartphone 40 and the second parameter to the in-vehicle audio device 20A.
  • the input sound data Di may be transmitted to the smartphone 40 together with the parameters.
  • the smartphone 40 performs a first process on the input sound data Di using the first parameter, and generates partially processed data.
  • the smartphone 40 transmits the partially processed data to the in-vehicle audio device 20A.
  • the in-vehicle audio device 20A performs second processing on the partially processed data using the second parameter, and generates output sound data Do.
  • the output sound data Do is output from the speaker 230 of the vehicle-mounted audio device 20A.
  • Both the first parameter and the second parameter may be transmitted to the smartphone 40. In this case, the second parameter is transmitted to the in-vehicle audio device 20A together with the partially processed data.
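The two-stage sharing described above (a first process with a first parameter on one device, then a second process with a second parameter on another) can be sketched as a simple pipeline. The concrete placeholder operations (a gain stage followed by an offset) are illustrative only; the patent does not specify what the first and second processes are.

```python
def first_process(chunk, p1):
    """First process (e.g. performed on the smartphone 40): placeholder gain stage."""
    return [s * p1 for s in chunk]       # -> partially processed data

def second_process(chunk, p2):
    """Second process (e.g. on the in-vehicle audio device 20A): placeholder offset."""
    return [s + p2 for s in chunk]       # -> output sound data Do

def shared_pipeline(di, p1, p2):
    partial = first_process(di, p1)      # first device uses the first parameter
    return second_process(partial, p2)   # second device uses the second parameter
```

Only the intermediate "partially processed data" crosses the device boundary, which is why the second parameter can travel either directly to the second device or alongside the partial data.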
  • FIG. 13 is a flowchart showing the operation of the control device 103 of the sound server 10 in the fourth embodiment.
  • various data may be transmitted and received in file units or in packet units.
  • the control device 103 functions as the second acquisition unit 112, and acquires at least one of first information related to the attributes of the input sound data Di and second information related to the in-vehicle sound device 20A (step S50).
  • the control device 103 functions as the parameter determining unit 113, and determines parameters to be used for acoustic processing on the input sound data Di, based on at least one of the first information and the second information (step S51). Further, the control device 103 functions as the device determining unit 118, and determines a sound processing device based on at least one of the first information and the second information (step S52).
  • If the sound processing device is not the sound server 10 (step S53: NO), the control device 103 functions as the third transmission control unit 119, and transmits the parameters determined in step S51 to the other device (smartphone 40 or in-vehicle audio device 20A) (step S54). After that, the control device 103 returns the process to step S50.
  • On the other hand, if the sound processing device includes the sound server 10 (step S53: YES), the control device 103 functions as the first acquisition unit 111 and acquires the input sound data Di (step S55).
  • the control device 103 functions as the output sound generation unit 114, and generates the output sound data Do by subjecting the input sound data Di to acoustic processing using the parameters determined in step S51 (step S57).
  • the control device 103 functions as the first transmission control section 115 and transmits the output sound data Do to the in-vehicle audio device 20A (step S58). After that, the control device 103 returns the process to step S50.
  • the control device 103 functions as the output sound generation unit 114, and applies the partial acoustic processing handled by the audio server 10 (for example, the first process) to the input sound data Di to generate partially processed data (step S59). At this time, some of the parameters determined in step S51 (for example, the first parameter) are used.
  • the control device 103 functions as the first transmission control unit 115, and transmits the partially processed data and the parameters used in the processing handled by the other device (for example, the second parameter) to the other device (step S60). After that, the control device 103 returns the process to step S50.
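The dispatch flow of steps S50 to S60 can be sketched as follows. This is a minimal illustration only: the function names, the information fields (`has_dsp`, `load`), and the placeholder parameters are all invented for this sketch and are not the patent's implementation.

```python
def determine_device(first_info, second_info):
    """Hypothetical stand-in for step S52: pick the sound processing device."""
    if second_info.get("has_dsp") and first_info.get("load") == "low":
        return "other_device"          # e.g. the smartphone 40 or in-vehicle audio device 20A
    if first_info.get("load") == "split":
        return "split"                 # share the work between server and the other device
    return "server"                    # the audio server 10 handles all processing

def apply_effects(sound, params):
    """Placeholder acoustic processing: only records which parameters were applied."""
    return {"sound": sound, "applied": sorted(params)}

def dispatch(first_info, second_info, input_sound):
    params = {"eq": "flat", "gain_db": 0}                         # step S51 (placeholder values)
    device = determine_device(first_info, second_info)            # step S52
    if device == "other_device":                                  # step S53: NO
        return ("send_params", params)                            # step S54
    if device == "server":                                        # steps S55 to S58
        return ("send_output", apply_effects(input_sound, params))
    # split case: first process here, second process on the other device (S59, S60)
    partial = apply_effects(input_sound, {"eq": params["eq"]})
    return ("send_partial", partial, {"gain_db": params["gain_db"]})
```

The three return shapes correspond to the three branches described above: forwarding parameters only, sending fully processed output sound data, or sending partially processed data together with the remaining parameter.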
  • the audio server 10 determines the audio processing device that performs audio processing on the input sound data Di, based on at least one of the first information and the second information. Therefore, the input sound data can be processed by an appropriate device, and the efficiency of the entire system can be improved.
  • the processing load on the in-vehicle audio device 20A or the smartphone 40 can be reduced compared to performing the audio processing in the in-vehicle audio device 20A or the smartphone 40.
  • because the audio server 10 performs high-load acoustic processing that the in-vehicle audio device 20A or the smartphone 40 could otherwise handle only if equipped with a control device dedicated to audio processing, the configuration of the in-vehicle audio device 20A or the smartphone 40 is simplified. As a result, the cost of the in-vehicle audio device 20A or the smartphone 40 is reduced.
  • the processing load of the audio processing can be distributed. Therefore, concentration of processing load on a specific device is avoided.
  • in the embodiments above, the sound output devices are the vehicle-mounted audio devices 20A to 20N; however, the sound output device is not limited to this, and may be any electronic device that can use sound data.
  • the sound output device may be an electronic device that is carried and used by the user.
  • the electronic device carried and used by the user may be, for example, a smartphone, a portable audio player, a personal computer, a tablet terminal, a smart watch, or the like. These electronic devices either have built-in speakers or have external speakers or earphones attached.
  • the second acquisition unit 112 of the acoustic server 10 acquires the output characteristics of the earphones connected to the smartphone as second information.
  • the parameter determining unit 113 determines parameters for sound processing in accordance with the output characteristics of the earphone. Thereby, the quality of the sound output from the earphones can be improved.
  • the second acquisition unit 112 acquires the position information of the earphone (or smartphone) as second information.
  • the parameter determining unit 113 determines parameters for sound processing according to the position of the earphone. Specifically, the parameter determining unit 113 increases the gain in the bass range when the earphone is outdoors, and lowers the volume when the earphone is indoors. As a result, acoustic processing suited to the listening location can be performed, which improves both the sound quality and the audibility of the sound output from the earphones.
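As a concrete illustration of this location-dependent parameter choice (boost the bass range outdoors, lower the volume indoors), a minimal sketch follows; the numeric values and field names are invented for this example, not taken from the patent.

```python
def location_parameters(is_outdoors: bool) -> dict:
    """Pick acoustic-processing parameters from the earphone's location."""
    if is_outdoors:
        # outdoors: raise the gain in the bass range
        return {"bass_gain_db": 6.0, "volume": 1.0}
    # indoors: lower the volume
    return {"bass_gain_db": 0.0, "volume": 0.7}
```

In practice the location could come from the position information of the earphone (or smartphone) acquired as second information, as described above.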
  • processing resources conventionally devoted to sound processing can be used for other processing, and it is possible to prevent a decline in smartphone performance due to the use of sound data.
  • since the smartphone's control device does not perform the acoustic processing, the smartphone's power consumption can be reduced.
  • since the smartphone's control device does not perform the acoustic processing, there is no need to use a high-spec control device for processing sound data, and costs can be reduced.
  • Functions of the acoustic server 10 are realized by cooperation of one or more processors forming the control device 103 and the program PG1 stored in the storage device 102.
  • the functions of the in-vehicle audio devices 20A to 20N are realized by cooperation of one or more processors constituting the control device 216 and the program PG2 stored in the storage device 215.
  • the above program may be provided in a form stored in a computer-readable recording medium and installed on a computer.
  • the recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disk) such as a CD-ROM is a good example, but recording media of any known form, such as semiconductor recording media or magnetic recording media, are also included. Note that a non-transitory recording medium includes any recording medium except transitory, propagating signals, and does not exclude volatile recording media. Further, in a configuration in which a distribution device distributes the program via the network N, the recording medium that stores the program in the distribution device corresponds to the above-mentioned non-transitory recording medium.
  • "at least one of A and B" or "at least one of A or B" means "(A), (B), or (A and B)".
  • "at least one of A and B" may be rephrased as "one or more of A and B" or "at least one selected from the group of A and B".
  • "at least one of A, B, and C" ("at least one of A, B and C" or "at least one of A, B or C") means "(A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C)".
  • "at least one of A, B, and C" may be rephrased as "one or more of A, B, and C" or "at least one selected from the group of A, B, and C".
  • An information processing device according to one aspect is an information processing device that generates a plurality of output sound data used in each of a plurality of sound output devices, and includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the one sound output device via a network.
  • the information processing device generates output sound data by performing acoustic processing on input sound data, and transmits the output sound data to the one sound output device. Therefore, there is no need to provide the one sound output device with a control device for performing the acoustic processing, so the configuration of the one sound output device is simplified and its cost is reduced. Further, the information processing device determines the parameters used for the acoustic processing based on at least one of the first information regarding attributes of the input sound data and the second information regarding the one sound output device. The parameters are therefore set appropriately, which improves the sound quality of the sound based on the output sound data.
  • the information acquisition unit acquires information indicating acoustic characteristics of the one sound output device as the second information.
  • according to the above configuration, the acoustic characteristics of the one sound output device are reflected in the sound processing parameters, so the information processing device can perform acoustic processing suited to using the output sound data with the one sound output device.
  • the information acquisition unit acquires information regarding sounds generated around the one sound output device as the second information.
  • according to the above configuration, the sound around the one sound output device is reflected in the sound processing parameters, so the information processing device can perform acoustic processing appropriate for using the output sound data in an environment where sounds are generated in the surroundings.
  • the information acquisition unit continuously acquires information regarding the sound from the one sound output device during transmission of the output sound data, and the parameter determination unit redetermines the parameters if the information regarding the sound changes.
  • the information acquisition unit acquires information regarding the format of the input sound data as the first information, and acquires, as the second information, information regarding the formats that the one sound output device can output. The parameter determination unit determines, as the parameters, whether format conversion processing of the input sound data is necessary and, if the format conversion processing is necessary, the conversion destination format. According to the above configuration, the user can use the input sound data on the one sound output device without being aware of the format of the input sound data.
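A minimal sketch of this format-conversion decision, assuming the first information carries the input format and the second information lists the formats the output device can handle; the format names and the "preferred format is listed first" convention are assumptions for illustration only.

```python
def conversion_plan(input_format: str, supported_formats: list) -> dict:
    """Decide whether format conversion is needed and, if so, the target format."""
    if input_format in supported_formats:
        return {"convert": False, "target": input_format}
    # assume the device's preferred conversion-destination format is listed first
    return {"convert": True, "target": supported_formats[0]}
```

With such a plan, the user never has to know the format of the input sound data: conversion happens (or not) transparently on the server side.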
  • the information processing device further includes a reception unit that accepts a change to the parameters from a user using the one sound output device while the output sound data is being transmitted, and the parameter determination unit changes the parameters used for the acoustic processing to the parameters changed by the user when the reception unit accepts the change.
  • the information processing device can perform acoustic processing reflecting the user's preference or the situation that is not reflected in the first information or the second information.
  • the one sound output device is an in-vehicle audio device that outputs the sound into the cabin of the vehicle.
  • the information processing device can improve the sound quality of the sound output into the cabin of the vehicle, whose listening environment is poor compared with the inside of a building.
  • the information acquisition unit acquires, as the second information, at least one of information indicating an operating state of the vehicle and information indicating a running state of the vehicle.
  • the information processing device can reflect at least one of the operation state of the vehicle and the running state of the vehicle in the acoustic processing of input sound data.
  • the information processing device further includes a vehicle sound generation unit that generates, based on at least one of information indicating an operating state of the vehicle and information indicating a running state of the vehicle, vehicle sound data representing a sound to be output from the one sound output device, and the data transmission control unit transmits the vehicle sound data to the one sound output device via the network.
  • the information processing device can reduce the processing load on the control device provided in the vehicle, compared to the case where vehicle sound data is generated by the control device provided in the vehicle.
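One way such a vehicle sound generation unit might map vehicle state to a sound, loosely modeled on the map F (from virtual engine speed and accelerator opening to engine sound data Dse) mentioned in the figure descriptions; the rpm bands, thresholds, and key names below are invented for this sketch.

```python
def select_engine_sound(virtual_rpm: int, accel_opening: float) -> str:
    """Map virtual engine speed and accelerator opening to an engine-sound data key."""
    rpm_band = "high" if virtual_rpm >= 4000 else "mid" if virtual_rpm >= 2000 else "low"
    load = "full" if accel_opening >= 0.8 else "cruise"
    return "engine_{}_{}".format(rpm_band, load)   # key into a table of vehicle sound data
```

The returned key would then select the vehicle sound data transmitted to the in-vehicle audio device, keeping this lookup off the vehicle's own control device.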
  • An information processing device according to another aspect is an information processing device that performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices, and includes: an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • the information processing device determines a sound processing device that performs sound processing on input sound data based on at least one of the first information and the second information. Therefore, the input sound data can be processed by an appropriate device, and the efficiency of the entire system can be improved.
  • the information processing device further includes: a data acquisition unit that acquires the input sound data when the device determination unit determines that the information processing device is the sound processing device; an output sound generation unit that generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameters; and a data transmission control unit that transmits the output sound data to the one sound output device.
  • according to the above configuration, the configuration of the other devices can be simplified, and as a result, the cost of the other devices is reduced.
  • the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process. The parameter determination unit determines the first parameter and the second parameter, and the device determination unit determines that the information processing device executes the first process and that the other device executes the second process. The output sound generation unit performs the first process on the input sound data using the first parameter to generate partially processed data, and the data transmission control unit transmits the partially processed data and the second parameter to the other device.
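The split processing above can be sketched as follows. The gain and offset operations merely stand in for the unspecified first and second processes, and every name here is hypothetical, not from the patent.

```python
def first_process(samples, first_parameter):
    """Runs on the information processing device: e.g. a simple gain stage."""
    return [s * first_parameter for s in samples]

def second_process(partially_processed, second_parameter):
    """Runs on the other device: e.g. an offset applied after reception."""
    return [s + second_parameter for s in partially_processed]

partial_data = first_process([1.0, 2.0], 2.0)     # transmitted together with the second parameter
output_sound = second_process(partial_data, 1.0)  # completed on the other device
```

The point of the split is that the partially processed data and the second parameter travel together, so the receiving device can finish the acoustic processing without knowing how the first stage was performed.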
  • when the device determination unit determines that another device other than the information processing device is the sound processing device, the information processing device further includes a parameter transmission control unit that transmits the parameters to the other device. According to the above configuration, the other device determined to be the sound processing device is provided with the parameters used for the acoustic processing, and the acoustic processing is executed appropriately.
  • the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process. The parameter determination unit determines the first parameter and the second parameter, the device determination unit determines that a first sound processing device executes the first process and that a second sound processing device executes the second process, and the parameter transmission control unit transmits the first parameter to the first sound processing device and the second parameter to the second sound processing device.
  • the processing load on each device is reduced.
  • the one sound output device is an electronic device that is carried and used by the user. According to the above configuration, the quality of sound output from the electronic device carried and used by the user is improved.
  • An information processing system according to one aspect includes a plurality of sound output devices and an information processing device that generates a plurality of output sound data used in each of the plurality of sound output devices. The information processing device includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameters; and a data transmission control unit that transmits the output sound data to the one sound output device via a network.
  • An information processing system according to another aspect includes a plurality of sound output devices and an information processing device that performs processing regarding a plurality of output sound data used in each of the plurality of sound output devices. The information processing device includes: an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
  • An information processing method according to one aspect is an information processing method that is realized by a computer and generates a plurality of output sound data to be used in each of a plurality of sound output devices. The method acquires input sound data; acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; determines, based on at least one of the first information and the second information, parameters to be used in acoustic processing for imparting acoustic effects to the input sound data; generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameters; and transmits the output sound data to the one sound output device via a network.
  • An information processing method is an information processing method that is realized by a computer and performs processing on multiple output sound data used in each of multiple sound output devices, and obtains at least one of first information related to attributes of input sound data and second information related to one of the multiple sound output devices, determines parameters to be used in sound processing that imparts sound effects to the input sound data based on at least one of the first information and the second information, and determines a sound processing device that performs the sound processing on the input sound data based on at least one of the first information and the second information.
  • the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process. The method determines the first parameter and the second parameter; determines that the first process is executed by the computer and that the second process is executed by another device other than the computer; acquires the input sound data; performs the first process on the input sound data using the first parameter to generate partially processed data; and transmits the partially processed data and the second parameter to the other device.
  • Operating device 213... Sound data acquisition device, 214... Microphone, 215...Storage device, 216...Control device, 220...Amplifier, 230 (230A to 230F)...Speaker, 240...Sound control device, 251...Vehicle information transmitting section, 252...Setting reception section, 253...Second transmission control section, 254...Reception control unit, 255...Output control unit, C...Vehicle, Di...Input sound data, N...Network.

Abstract

This information processing device generates a plurality of pieces of output sound data used in each of a plurality of sound output devices. A data acquisition unit acquires input sound data. An information acquisition unit acquires at least one of first information related to the attribute of the input sound data and second information related to the plurality of sound output devices. A parameter determination unit determines, on the basis of at least one of the first information and the second information, a parameter used for acoustic processing that gives an acoustic effect to the input sound data. An output sound generation unit uses the parameter to apply the acoustic processing to the input sound data, thereby generating output sound data that is to be used in one sound output device. A data transmission control unit transmits the output sound data to the one sound output device via a network.

Description

Information processing device, information processing system, and information processing method
 The present disclosure relates to a technology for processing sound data used in a sound output device.
 Conventionally, in sound output devices such as smartphones, portable audio players, and in-vehicle audio devices, techniques are known for processing sound data in accordance with the user's preferences or the user's surrounding environment. For example, in Patent Document 1 below, a sound output device (user device) placed near a user accesses a user profile stored on a cloud, and determines processing parameters for the sound data based on the user profile. The sound output device outputs to the user a sound based on the sound data processed using the processing parameters.
JP 2020-109968 A
 In the conventional technology described above, the processing of the sound data is performed by the sound output device. Therefore, the sound output device must be equipped with a high-performance control device capable of processing the sound data, which raises the cost of the sound output device.
 One aspect of the present disclosure aims to reduce the cost of a sound output device.
 In order to solve the above problems, an information processing device according to one aspect of the present disclosure is an information processing device that generates a plurality of output sound data used in each of a plurality of sound output devices, and includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the one sound output device via a network.
 Further, an information processing device according to one aspect of the present disclosure is an information processing device that performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices, and includes: an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
 Further, an information processing system according to one aspect of the present disclosure includes a plurality of sound output devices and an information processing device that generates a plurality of output sound data used in each of the plurality of sound output devices, wherein the information processing device includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the one sound output device via a network.
 Further, an information processing system according to one aspect of the present disclosure includes a plurality of sound output devices and an information processing device that performs processing regarding a plurality of output sound data used in each of the plurality of sound output devices, wherein the information processing device includes: an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in acoustic processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
 Further, an information processing method according to one aspect of the present disclosure is an information processing method that is realized by a computer and generates a plurality of output sound data to be used in each of a plurality of sound output devices. The method acquires input sound data; acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices; determines, based on at least one of the first information and the second information, parameters to be used in acoustic processing for imparting acoustic effects to the input sound data; generates output sound data to be used in the one sound output device by performing the acoustic processing on the input sound data using the parameters; and transmits the output sound data to the one sound output device via a network.
 Further, an information processing method according to one aspect of the present disclosure is an information processing method that is realized by a computer and performs processing regarding a plurality of output sound data used in each of a plurality of sound output devices. The method acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices; determines, based on at least one of the first information and the second information, parameters to be used in acoustic processing for imparting acoustic effects to the input sound data; and determines, based on at least one of the first information and the second information, a sound processing device that performs the acoustic processing on the input sound data.
FIG. 1 is a diagram illustrating the configuration of an information processing system 1 according to a first embodiment.
FIG. 2 is an explanatory diagram showing the relationship between input sound data Di and output sound data Do.
FIG. 3 is a diagram schematically showing an example of the data flow between a distribution server 30 and an in-vehicle audio device 20A.
FIG. 4 is a diagram schematically showing another example of the data flow between the distribution server 30 and the in-vehicle audio device 20A.
FIG. 5 is a block diagram showing the configuration of the audio server 10.
FIG. 6 is a block diagram showing the configuration of the in-vehicle audio device 20A.
FIG. 7 is a diagram illustrating the arrangement of speakers 230 in a vehicle C.
FIG. 8 is a flowchart showing the operation of the control device 103 of the audio server 10.
FIG. 9 is a block diagram showing the configuration of the audio server 10 in a second embodiment.
FIG. 10 is a diagram schematically showing a map F for selecting engine sound data Dse from a virtual engine speed and accelerator opening information.
FIG. 11 is a block diagram showing the configuration of the in-vehicle audio device 20A in a third embodiment.
FIG. 12 is a diagram illustrating the configuration of an information processing system 2 according to a fourth embodiment.
FIG. 13 is a block diagram showing the configuration of the audio server 10 in the fourth embodiment.
FIG. 14 is a flowchart showing the operation of the control device 103 of the audio server 10 in the fourth embodiment.
A: First embodiment
A-1: System configuration
 FIG. 1 is a diagram illustrating the configuration of an information processing system 1 according to the first embodiment. The information processing system 1 includes an audio server 10 and a plurality of in-vehicle audio devices 20 (20A to 20N). The audio server 10 is an example of an information processing device and of a computer, and the in-vehicle audio devices 20A to 20N are an example of a plurality of sound output devices. The audio server 10 and each of the plurality of in-vehicle audio devices 20A to 20N are connected to a network N. The network N may be a wide area network such as the Internet, or may be a local area network (LAN) of a facility or the like.
The in-vehicle audio devices 20A to 20N are each mounted on a vehicle C (see FIG. 5) such as an automobile and output sound from speakers 230 (see FIG. 5) into the cabin of the vehicle C. The in-vehicle audio devices 20A to 20N are mounted on different vehicles C. The following description focuses on one of them, the in-vehicle audio device 20A, but the other in-vehicle audio devices 20B to 20N have the same functions as the in-vehicle audio device 20A. The in-vehicle audio device 20A is an example of one sound output device among the plurality of sound output devices. Although details will be described later, the sounds output from the in-vehicle audio devices 20A to 20N are, for example, music or radio broadcasts, guidance voice from a navigation device 52, or warning sounds from a safety system of the vehicle C.
The acoustic server 10 generates a plurality of pieces of output sound data Do (see FIG. 2) used by the respective in-vehicle audio devices 20A to 20N. FIG. 2 is an explanatory diagram showing the relationship between input sound data Di and output sound data Do. FIG. 2 takes one of the in-vehicle audio devices, the in-vehicle audio device 20A, as an example. The acoustic server 10 acquires, as input sound data Di, the sound data of the sound to be output by the in-vehicle audio device 20A. The input sound data Di includes at least one of local sound data Dsl transmitted from the in-vehicle audio devices 20A to 20N and distributed sound data Dsn distributed from a distribution server 30. The distribution server 30 is a server that distributes sound data via the network N. The acoustic server 10 generates the output sound data Do by performing acoustic processing that adds acoustic effects to the input sound data Di, and transmits the output sound data Do to the in-vehicle audio device 20A. Having received the output sound data Do, the in-vehicle audio device 20A outputs sound based on the output sound data Do from the speakers 230. The acoustic server 10 likewise transmits output sound data Do to the other in-vehicle audio devices 20B to 20N.
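The Di-to-Do relationship described above can be sketched as follows. This is only an illustrative stand-in, not part of the disclosed embodiment: the text does not specify any particular acoustic effect, so a simple delayed echo and all function names here are assumptions.

```python
# Illustrative sketch (not the embodiment's actual processing): the acoustic
# server takes input sound samples Di, applies an acoustic effect, and
# produces output sound samples Do. A single decayed echo stands in for the
# unspecified acoustic processing.
from typing import List

def apply_acoustic_effect(di: List[float], delay: int = 4, decay: float = 0.5) -> List[float]:
    """Mix a decayed, delayed copy of the signal back into itself (simple echo)."""
    do = list(di)
    for n in range(delay, len(di)):
        do[n] += decay * di[n - delay]
    return do

input_di = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
output_do = apply_acoustic_effect(input_di)
print(output_do)  # [1.0, 0.0, 0.0, 0.0, 1.5, 0.0, 0.0, 0.0]
```

In the actual system, the input would be the decoded samples of the local sound data Dsl or distributed sound data Dsn, and the result would be transmitted to the in-vehicle audio device 20A over the network N.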
A-2: Hardware configuration
A-2-1: Acoustic server 10
FIG. 4 is a block diagram showing the configuration of the acoustic server 10. The acoustic server 10 includes a communication device 101, a storage device 102, and a control device 103.
The communication device 101 communicates with other devices using wireless or wired communication. In this embodiment, the communication device 101 includes a communication interface connectable to the network N by wired communication and communicates with the in-vehicle audio devices 20A to 20N via the network N. The communication device 101 also communicates with the distribution server 30 via the network N.
The storage device 102 stores a program PG1 executed by the control device 103. The storage device 102 also stores map data MP, a vehicle-specific acoustic characteristic information database DB, and user setting data US. The map data MP includes at least one of the topography of each area, road shapes, the number of lanes, the types of facilities (including forests and the like) around roads, and predicted traffic volume by time of day. The map data MP need not be stored in the storage device 102; it may instead be acquired via the network N from, for example, a map data server (not shown) that distributes the map data MP. Details of the vehicle-specific acoustic characteristic information DB and the user setting data US will be described later.
The storage device 102 is a computer-readable recording medium (for example, a computer-readable non-transitory recording medium). The storage device 102 includes a nonvolatile memory and a volatile memory. The nonvolatile memory is, for example, a ROM (Read Only Memory), an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (Electrically Erasable Programmable Read Only Memory). The volatile memory is, for example, a RAM (Random Access Memory). Note that the storage device 102 may be a portable recording medium attachable to and detachable from the acoustic server 10, or a recording medium (for example, cloud storage) to which the control device 103 can write and from which it can read via the network N.
The control device 103 is composed of one or more processors that control each element of the acoustic server 10. For example, the control device 103 is composed of one or more types of processors such as a CPU (Central Processing Unit), an SPU (Sound Processing Unit), a DSP (Digital Signal Processor), an FPGA (Field Programmable Gate Array), or an ASIC (Application Specific Integrated Circuit).
By executing the program PG1 stored in the storage device 102, the control device 103 functions as a first acquisition unit 111, a second acquisition unit 112, a parameter determination unit 113, an output sound generation unit 114, a first transmission control unit 115, and a change reception unit 116. Details of these units will be described later.
A-2-2: In-vehicle audio device 20A
FIG. 5 is a block diagram showing the configuration of the in-vehicle audio device 20A. Although FIG. 5 takes the in-vehicle audio device 20A as an example, the in-vehicle audio devices 20B to 20N have the same configuration. As described above, the in-vehicle audio device 20 is mounted on a vehicle C. The in-vehicle audio device 20 includes a head unit 200, an amplifier 220, and speakers 230. The head unit 200 is provided, for example, in the instrument panel of the vehicle C. The head unit 200 includes a communication device 211, an operating device 212, a sound data acquisition device 213, a microphone 214, a storage device 215, and a control device 216.
The communication device 211 includes a communication interface for connecting to a wide area network, connectable to the network N by wireless communication, and communicates with the acoustic server 10 via the network N. The communication device 211 receives the output sound data Do from the acoustic server 10. The communication device 211 is an example of a receiving device.
The operating device 212 receives operations performed by a user of the vehicle C. The user of the vehicle C is, for example, a passenger of the vehicle C. In this embodiment, the operating device 212 is a touch panel. The operating device 212 is not limited to a touch panel and may be an operation panel having various operation buttons.
The sound data acquisition device 213 acquires sound data of the sound to be output by the in-vehicle audio device 20. The sound data acquisition device 213 may be, for example, a reading device that reads sound data stored on a recording medium such as a CD (Compact Disc) or an SD card. The sound data acquisition device 213 may also be a receiver for radio or television broadcasts. Alternatively, the sound data acquisition device 213 may be a communication device connectable, by wireless or wired communication, to an electronic device located nearby (for example, a smartphone or a portable music player). In this case, the sound data acquisition device 213 includes a communication interface for short-range communication (for example, Bluetooth (registered trademark) or USB (Universal Serial Bus)) and communicates with devices located nearby. The sound data acquired by the sound data acquisition device 213 is hereinafter referred to as "acquired sound data Dsy."
The microphone 214 picks up sound inside the cabin of the vehicle C and generates sound data of the picked-up sound (hereinafter referred to as "collected sound data"). The collected sound data generated by the microphone 214 is output to the control device 216 of the head unit 200. The microphone 214 need not be provided in the head unit 200; for example, a plurality of microphones may be provided at multiple locations in the cabin, or a microphone may be provided outside the vehicle. The microphone 214 may also be externally connected to the head unit 200.
The storage device 215 stores a program PG2 executed by the control device 216. The storage device 215 may also store sound data. The sound data stored in the storage device 215 may be, for example, sound data representing music or the like, or system sounds output when the head unit 200 is operated. The sound data stored in the storage device 215 is hereinafter referred to as "stored sound data Dsm."
The storage device 215 is a computer-readable recording medium (for example, a computer-readable non-transitory recording medium). The storage device 215 includes a nonvolatile memory and a volatile memory. The nonvolatile memory is, for example, a ROM, an EPROM, or an EEPROM. The volatile memory is, for example, a RAM. Note that the storage device 215 may be a portable recording medium attachable to and detachable from the in-vehicle audio device 20, or a recording medium (for example, cloud storage) to which the control device 216 can write and from which it can read via the network N.
The control device 216 is composed of one or more processors that control each element of the in-vehicle audio device 20. For example, the control device 216 is composed of one or more types of processors such as a CPU, an SPU, a DSP, an FPGA, or an ASIC.
In this embodiment, the control device 216 is connected to a vehicle ECU (Electronic Control Unit) 50, a navigation device 52, and a camera 54. The vehicle ECU 50 controls the operation of the vehicle C. More specifically, the vehicle ECU 50 controls the drive mechanism of the vehicle C, such as an engine or a motor, and braking mechanisms such as brakes, based on the operating states of operation mechanisms of the vehicle C such as a steering wheel, a shift lever, an accelerator pedal, and a brake pedal (none shown). The vehicle ECU 50 outputs system sound data Dss of the vehicle C to the control device 216. For example, when the shift lever is set to reverse (R), the vehicle ECU 50 outputs, as the system sound data Dss, a warning sound indicating that the vehicle C is moving backward. Also, for example, when the traveling speed of the vehicle C exceeds the speed limit, the vehicle ECU 50 outputs, as the system sound data Dss, a warning sound indicating that the vehicle is exceeding the speed limit.
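The two warning-sound conditions described for the vehicle ECU 50 can be summarized in the following sketch. The function name and sound labels are illustrative assumptions; the embodiment states only the conditions themselves.

```python
# Illustrative sketch of the vehicle ECU 50's warning-sound rules described
# above. The labels 'reverse_warning' and 'overspeed_warning' are assumed
# placeholders for the corresponding system sound data Dss.
def select_system_sounds(shift_position: str, speed_kmh: float, speed_limit_kmh: float) -> list:
    sounds = []
    if shift_position == "R":
        sounds.append("reverse_warning")    # vehicle C is moving backward
    if speed_kmh > speed_limit_kmh:
        sounds.append("overspeed_warning")  # traveling speed exceeds the limit
    return sounds

print(select_system_sounds("R", 45.0, 40.0))  # ['reverse_warning', 'overspeed_warning']
print(select_system_sounds("D", 30.0, 40.0))  # []
```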
The navigation device 52 searches for a route to a destination set by the user and provides route guidance to the destination. As a specific example, the navigation device 52 displays a map of the area around the current position of the vehicle C on its own display, and displays a mark indicating the current position of the vehicle C superimposed on the map. The navigation device 52 also outputs guidance voice that instructs the user on the direction of travel along the route to the destination. The navigation device 52 may also output guidance voice calling attention to traffic regulations, such as the speed limit of the road on which the vehicle C is traveling. The guidance voice of the navigation device 52 is output from the speakers 230. The navigation device 52 outputs guidance voice data Dsa, which is voice data corresponding to the guidance voice, to the control device 216. The navigation device 52 may also output position information of the vehicle C, generated by a GPS (Global Positioning System) device (not shown), to the control device 216.
The camera 54 captures images of the cabin of the vehicle C and generates image data. The image data generated by the camera 54 is output to the control device 216. The camera 54 may capture images not only of the cabin but also of the outside of the vehicle. The camera 54 may also serve as, for example, a drive recorder mounted on the vehicle C or an imaging device of a safety system of the vehicle C.
By executing the program PG2, the control device 216 functions as a vehicle information transmission unit 251, a setting reception unit 252, a second transmission control unit 253, a reception control unit 254, and an output control unit 255. Details of these units will be described later.
The amplifier 220 amplifies sound data and supplies the amplified sound data to the speakers 230. In the first embodiment, the output sound data Do output from the control device 216 is input to the amplifier 220.
The speakers 230 output sound based on the output sound data Do. In this embodiment, a plurality of speakers 230 constitute a speaker set. The arrangement of the speakers 230 differs from vehicle to vehicle, depending on the model of the vehicle C, customization by the user, and the like. Note that the speaker 230 may be a single speaker.
FIG. 6 is a diagram illustrating the arrangement of the speakers 230 in the vehicle C. The vehicle C includes seats P1 to P4. The seats P1 and P2 are provided at the front of the cabin of the vehicle C. The seat P1 is the driver's seat, and the seat P2 is the passenger's seat. The seats P3 and P4 are provided at the rear of the cabin of the vehicle C. The seat P3 is located behind the driver's seat P1, and the seat P4 is located behind the passenger's seat P2.
The vehicle C also includes doors D1 to D4. The door D1 is for an occupant seated in the seat P1 to get in and out. Note that an occupant is an example of a user. The door D2 is for an occupant seated in the seat P2 to get in and out. The door D3 is for an occupant seated in the seat P3 to get in and out. The door D4 is for an occupant seated in the seat P4 to get in and out.
The speakers 230A and 230B are provided on the door D1. The speakers 230C and 230D are provided on the door D2. The speaker 230E is provided on the door D3. The speaker 230F is provided on the door D4. In other words, the speakers 230A and 230B are provided at a location corresponding to the seat P1, the speakers 230C and 230D at a location corresponding to the seat P2, the speaker 230E at a location corresponding to the seat P3, and the speaker 230F at a location corresponding to the seat P4.
In this embodiment, the sound output from the speakers 230 includes sound based on at least one of, for example, the acquired sound data Dsy acquired by the sound data acquisition device 213, the stored sound data Dsm stored in the storage device 215, the system sound data Dss output from the vehicle ECU 50, and the guidance voice data Dsa output from the navigation device 52. The acquired sound data Dsy, the stored sound data Dsm, the system sound data Dss, and the guidance voice data Dsa are sound data stored in the in-vehicle audio device 20A or sound data output by devices connected to the in-vehicle audio device 20A. They are hereinafter collectively referred to as "local sound data Dsl."
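The four sources grouped under the local sound data Dsl can be summarized as follows; the enum member names are assumptions made for illustration.

```python
# Illustrative grouping of the four kinds of sound data that the text
# collectively calls local sound data Dsl. Enum member names are assumed.
from enum import Enum

class LocalSound(Enum):
    ACQUIRED = "Dsy"   # from the sound data acquisition device 213
    STORED = "Dsm"     # from the storage device 215
    SYSTEM = "Dss"     # from the vehicle ECU 50
    GUIDANCE = "Dsa"   # from the navigation device 52

print([s.value for s in LocalSound])  # ['Dsy', 'Dsm', 'Dss', 'Dsa']
```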
The sound output from the speakers 230 may also be based on sound data distributed by the distribution server 30. FIG. 3A is a diagram schematically showing an example of the data flow between the distribution server 30 and the in-vehicle audio device 20A. The distribution server 30 distributes sound data via the network N. The sound data distributed by the distribution server 30 is hereinafter referred to as "distributed sound data Dsn." The distribution server 30 distributes, via the network N, distributed sound data Dsn representing, for example, music, environmental sounds, talk programs, news programs, or language learning materials. The distribution server 30 is not limited to distributing sound data and may also distribute video data that includes sound data. The distribution server 30 is operated, for example, by a provider of a distribution service that distributes sound data (including video data). Although FIG. 3A shows one distribution server 30, a plurality of distribution servers 30 may be provided. For example, a plurality of sound data distribution providers may each provide their own distribution server 30.
When receiving distributed sound data Dsn from the distribution server 30, the user selects the desired distributed sound data Dsn from among the plurality of pieces of distributed sound data Dsn distributed by the distribution server 30. More specifically, the in-vehicle audio device 20A (a setting reception unit 252 described later; see FIG. 5) acquires a list of the pieces of distributed sound data Dsn distributed by the distribution server 30 and displays it on the operating device 212 (touch panel). The user selects the desired distributed sound data Dsn from the list displayed on the operating device 212. Instead of selecting a specific piece of distributed sound data Dsn, the user may select attributes of the distributed sound data Dsn (the name of the creator of the sound data such as an artist name, the genre of the sound data, a situation that suits the sound data, and so on). When distributed sound data Dsn is selected, the in-vehicle audio device 20A (the setting reception unit 252) transmits information M specifying the distributed sound data Dsn (for example, a song title or attributes) to the acoustic server 10 via the communication device 211 (S11). The information M may include information specifying the formats of sound data that the in-vehicle audio device 20A can play back.
The acoustic server 10 transmits the information M to the distribution server 30 (S12). Based on the information M, the distribution server 30 identifies, from among the plurality of pieces of distributed sound data Dsn, the distributed sound data Dsn requested by the user. The distribution server 30 transmits the identified distributed sound data Dsn to the acoustic server 10 (S13). The acoustic server 10 transmits the distributed sound data Dsn to the in-vehicle audio device 20A (S14). At this point, the acoustic server 10 performs acoustic processing on the distributed sound data Dsn before transmitting it to the in-vehicle audio device 20A. That is, the acoustic server 10 acquires the distributed sound data Dsn as input sound data Di, and transmits the acoustically processed distributed sound data Dsn to the in-vehicle audio device 20A as output sound data Do.
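Steps S11 to S14 can be traced with the following sketch, in which each transfer is modeled as a function call. All function names and return values are assumptions for illustration; the embodiment specifies only the direction of each transfer.

```python
# Illustrative trace of the S11-S14 flow: device 20A -> acoustic server 10
# -> distribution server 30 -> acoustic server 10 (acoustic processing)
# -> device 20A. All names are assumed for illustration.
def distribution_server_lookup(info_m: str) -> str:
    return f"Dsn[{info_m}]"                   # S13: Dsn identified from information M

def acoustic_server_handle(info_m: str) -> str:
    dsn = distribution_server_lookup(info_m)  # S12: server forwards information M
    do = f"processed({dsn})"                  # acoustic processing: Di -> Do
    return do                                 # S14: output sound data Do to device 20A

def device_request(info_m: str) -> str:
    return acoustic_server_handle(info_m)     # S11: device 20A sends information M

print(device_request("song_title"))  # processed(Dsn[song_title])
```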
FIG. 3B is a diagram schematically showing another example of the data flow between the distribution server 30 and the in-vehicle audio device 20A. In FIG. 3B, the distribution server 30 and the in-vehicle audio device 20A are directly connected. In FIG. 3B, the in-vehicle audio device 20A acquires a list of the pieces of distributed sound data Dsn distributed by the distribution server 30 and receives the user's selection of the desired distributed sound data Dsn. When distributed sound data Dsn is selected, the in-vehicle audio device 20A transmits information M specifying the distributed sound data Dsn to the distribution server 30 via the communication device 211 (S21).
The distribution server 30 identifies the distributed sound data Dsn requested by the user based on the information M and transmits the identified distributed sound data Dsn to the in-vehicle audio device 20A (S22). The in-vehicle audio device 20A transmits the distributed sound data Dsn to the acoustic server 10 (S23). This distributed sound data Dsn becomes the input sound data Di. The acoustic server 10 performs acoustic processing on the distributed sound data Dsn and transmits it to the in-vehicle audio device 20A as output sound data Do (S24).
Note that in FIG. 3B, steps S21 and S22 may be executed not by the in-vehicle audio device 20A but by the user's smartphone. Using a smartphone makes it easier for, for example, an occupant seated in the rear seat P3 or P4 of the vehicle C to select the desired distributed sound data Dsn.
A-3: Functional configuration
A-3-1: Acoustic server 10
Next, the functional configuration of each device constituting the information processing system 1 will be described. As shown in FIG. 4, the control device 103 of the acoustic server 10 functions as the first acquisition unit 111, the second acquisition unit 112, the parameter determination unit 113, the output sound generation unit 114, the first transmission control unit 115, and the change reception unit 116. In the following description, the one in-vehicle audio device 20 to which the acoustic server 10 provides the output sound data Do is assumed to be the in-vehicle audio device 20A.
The first acquisition unit 111 acquires the input sound data Di. The first acquisition unit 111 is an example of a data acquisition unit. The input sound data Di is sound data corresponding to the sound output from the in-vehicle audio device 20A. In this embodiment, the first acquisition unit 111 acquires the input sound data Di in the following two ways.
[1] Acquiring the input sound data Di from the in-vehicle audio device 20A
When the user of the in-vehicle audio device 20A wishes to use the acquired sound data Dsy or the stored sound data Dsm (for example, to play music), the first acquisition unit 111 acquires the acquired sound data Dsy or the stored sound data Dsm from the in-vehicle audio device 20A via the network N. When it is necessary to output sound corresponding to the system sound data Dss of the vehicle C or sound corresponding to the guidance voice data Dsa of the navigation device 52, the first acquisition unit 111 acquires the system sound data Dss or the guidance voice data Dsa from the in-vehicle audio device 20A via the network N. That is, the first acquisition unit 111 acquires, as the input sound data Di, sound data stored in the in-vehicle audio device 20A and/or sound data output by devices connected to the in-vehicle audio device 20A. In other words, the first acquisition unit 111 acquires the local sound data Dsl as the input sound data Di.
[2] Acquiring the input sound data Di from the distribution server 30
When the user of the in-vehicle audio device 20A wishes to use distributed sound data Dsn, the first acquisition unit 111 acquires the input sound data Di via the network N from the distribution server 30 that distributes the distributed sound data Dsn. The method of acquiring the distributed sound data Dsn is as described with reference to FIGS. 3A and 3B.
The second acquisition unit 112 acquires first information regarding attributes of the input sound data Di and/or second information regarding the one in-vehicle audio device 20A among the plurality of in-vehicle audio devices 20. The second acquisition unit 112 is an example of an information acquisition unit. The details of <1> the first information and <2> the second information are described below.
<1> First information
The first information is information regarding attributes of the input sound data Di. The attributes of the input sound data Di include, for example, <1-1> information regarding the format of the input sound data Di and/or <1-2> information regarding the content of the sound. The <1-2> information regarding the content of the sound indicates, for example, at least one of the song title, artist name, or music genre of the input sound data Di.
<1-1> Information regarding the format of the input sound data Di
The information regarding the format of the input sound data Di is information that specifies the format of the input sound data Di. Well-known sound data formats include, for example, MP3 (MPEG-1 Audio Layer-3; lossy compression), AAC (Advanced Audio Coding; lossy compression), FLAC (Free Lossless Audio Codec; lossless compression), and WAV-PCM (uncompressed PCM data in the Waveform format). For example, the stored sound data Dsm and the guidance voice data Dsa may be in different formats. Likewise, the format of the distribution sound data Dsn distributed by the distribution server 30 may differ from one distribution service to another. The second acquisition unit 112 acquires, as the first information, the information regarding the format of the input sound data Di. Specifically, the second acquisition unit 112 determines the format of the input sound data Di based on, for example, the file extension of the input sound data Di acquired by the first acquisition unit 111.
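The extension-based format determination in <1-1> can be illustrated with a short sketch. The mapping and function names below are illustrative assumptions, not part of the embodiment; only the formats named in the description are listed.

```python
# Hypothetical sketch of the format check in <1-1>: the format of the
# input sound data Di is inferred from the file extension.
from pathlib import Path

# Formats named in the description, keyed by a typical file extension.
KNOWN_FORMATS = {
    ".mp3": "MP3 (lossy)",
    ".aac": "AAC (lossy)",
    ".flac": "FLAC (lossless)",
    ".wav": "WAV-PCM (uncompressed)",
}

def detect_format(filename: str) -> str:
    """Return the sound-data format implied by the file extension."""
    ext = Path(filename).suffix.lower()
    return KNOWN_FORMATS.get(ext, "unknown")
```

In practice a real implementation would also inspect container headers, since an extension can be wrong; the sketch shows only the decision described in the text.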
<1-2> Information regarding the content of the sound, such as the song title, artist name, or music genre of the input sound data Di
When the input sound data Di is music data, information such as the song title, artist name, and music genre is attached to the input sound data Di as metadata. The second acquisition unit 112 acquires the information regarding the content of the sound, such as the song title, artist name, or music genre of the input sound data Di, based on, for example, the metadata of the input sound data Di acquired by the first acquisition unit 111.
<2> Second information
The second information is information regarding the in-vehicle audio device 20A. The information regarding the in-vehicle audio device 20A includes, for example, at least one of <2-1> information indicating the acoustic characteristics of the in-vehicle audio device 20A (hereinafter "acoustic characteristic information") and <2-2> information regarding the environment in which the in-vehicle audio device 20A is placed (hereinafter "environmental information").
<2-1> Acoustic characteristic information of the in-vehicle audio device 20A
The second acquisition unit 112 acquires, as the second information, the acoustic characteristic information of the in-vehicle audio device 20A. In this embodiment, the acoustic characteristic information of the in-vehicle audio device 20A is information indicating what sound the user hears when the in-vehicle audio device 20A outputs a sound based on given sound data. The acoustic characteristic information of the in-vehicle audio device 20A includes information regarding the performance of the in-vehicle audio device 20A and information regarding the space (the vehicle cabin) through which the sound output from the in-vehicle audio device 20A (the speakers 230) travels before being heard by the user.
In this embodiment, the second acquisition unit 112 measures, for example, the acoustic characteristics of the in-vehicle audio device 20A. More specifically, the second acquisition unit 112 transmits test sound data to the in-vehicle audio device 20A. The in-vehicle audio device 20A outputs a sound corresponding to the test sound data (hereinafter a "test sound") from the speakers 230. The test sound is picked up using, for example, external microphones placed at the positions of the seats P1 to P4, and the picked-up sound data is output to the in-vehicle audio device 20A. Instead of using external microphones, the test sound may be picked up by the microphone 214 of the head unit 200. The in-vehicle audio device 20A transmits the picked-up sound data to the acoustic server 10. The second acquisition unit 112 acquires and analyzes the picked-up sound data, thereby estimating the acoustic characteristics of the in-vehicle audio device 20A. Alternatively, the in-vehicle audio device 20A may estimate the acoustic characteristics from the picked-up sound data.
The acoustic characteristics may be measured in advance, for example, before the in-vehicle audio device 20A is used. For example, when the in-vehicle audio device 20A is installed in the vehicle C, or when the in-vehicle audio device 20A is used for the first time, the test sound is output and the picked-up sound data is transmitted. The second acquisition unit 112 analyzes the picked-up sound data and estimates the acoustic characteristics of the in-vehicle audio device 20A. The second acquisition unit 112 records the estimated acoustic characteristics as acoustic characteristic information in the vehicle-specific acoustic characteristic information DB (see FIG. 4). At this time, the second acquisition unit 112 stores the acoustic characteristic information in the vehicle-specific acoustic characteristic information DB in association with identification information that identifies the in-vehicle audio device 20A. That is, the vehicle-specific acoustic characteristic information DB contains information in which the identification information of the in-vehicle audio device 20A is associated with the acoustic characteristic information of that device. The vehicle-specific acoustic characteristic information DB also stores the acoustic characteristic information of the other in-vehicle audio devices 20, such as the in-vehicle audio device 20B.
Thereafter, whenever the second acquisition unit 112 needs the acoustic characteristic information of the in-vehicle audio device 20A, it can acquire it by searching the vehicle-specific acoustic characteristic information DB using the identification information of the in-vehicle audio device 20A as a key.
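The store-then-lookup behavior of the vehicle-specific acoustic characteristic information DB can be sketched as follows. The class name, record layout, and sample values are illustrative assumptions; the actual schema is not specified in the embodiment.

```python
# Illustrative sketch of the vehicle-specific acoustic characteristic
# information DB: characteristics are stored keyed by the device's
# identification information and later retrieved with that key.
from typing import Optional

class AcousticCharacteristicDB:
    def __init__(self) -> None:
        # device_id -> acoustic characteristic information (free-form record)
        self._records: dict[str, dict] = {}

    def store(self, device_id: str, characteristics: dict) -> None:
        """Record estimated characteristics in association with a device ID."""
        self._records[device_id] = characteristics

    def lookup(self, device_id: str) -> Optional[dict]:
        """Search the DB using the identification information as a key."""
        return self._records.get(device_id)

db = AcousticCharacteristicDB()
db.store("device-20A", {"freq_response": "measured", "reverb_time_s": 0.08})
```

A production server would back this with persistent storage shared across devices 20A, 20B, and so on; the sketch shows only the keyed association described in the text.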
The acoustic characteristics may instead be measured, for example, every time the in-vehicle audio device 20A is used. In this case, the test sound is output and picked up, for example, each time before the vehicle C starts traveling. Measuring the acoustic characteristics at every use makes it possible to estimate them in a way that reflects the in-vehicle environment at that time. For example, the number of occupants in the vehicle C and their seating positions may differ from one trip to the next. Estimating the acoustic characteristics at every use allows the absorption and reflection of sound by the occupants' bodies to be reflected in the acoustic characteristics.
Alternatively, instead of measuring the acoustic characteristics with a test sound, the second acquisition unit 112 may acquire, as the acoustic characteristic information of the in-vehicle audio device 20A, at least one of information regarding the performance of the in-vehicle audio device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C. The parameter determination unit 113, described below, can estimate the acoustic characteristics of the in-vehicle audio device 20 by running a simulation that uses this performance, specification, and occupant information.
The information regarding the performance of the in-vehicle audio device 20A is, for example, the product numbers (model numbers) of the head unit 200, the speakers 230, and the amplifier 220. The information regarding the specifications of the vehicle C is, for example, the vehicle model (including model number, grade, and the like) of the vehicle C in which the in-vehicle audio device 20A is installed, or the cabin layout. In general, once the vehicle model (including model number, grade, and the like) of the vehicle C is identified, the cabin layout and the materials of the seats P arranged in the cabin can be identified. On the other hand, when the user has retrofitted the speakers 230, for example, it is preferable to obtain information that identifies the actual cabin layout. The cabin layout is information such as the dimensions of the cabin, the positions of the seats P1 to P4, and the positions of the speakers 230. The information regarding the occupants of the vehicle C is information such as the number of occupants, their seating positions (the seats P in which they sit), and their physiques.
<2-2> Environmental information of the in-vehicle audio device 20A
The second acquisition unit 112 acquires, as the second information, environmental information regarding the environment in which the in-vehicle audio device 20A is placed. The environment in which the in-vehicle audio device 20A is placed is, for example, the vehicle C. The environmental information includes, for example, vehicle information such as information indicating the operation state of the vehicle C and detection information from the sensors 74 mounted on the vehicle C. In addition, at least one of the cabin layout described above (information such as the dimensions of the cabin, the positions of the seats P1 to P4, and the positions of the speakers 56) and the information regarding the occupants of the vehicle C (information such as the number of occupants, their seating positions (the seats P in which they sit), and their physiques) may be acquired as the environmental information.
The environmental information also includes, for example, information regarding sounds occurring around the in-vehicle audio device 20A (hereinafter "ambient sounds"). In this embodiment, an ambient sound is a sound occurring around the in-vehicle audio device 20A, for example a sound occurring inside or outside the vehicle C. Sounds occurring inside the vehicle C include, for example, conversation between occupants, the voice of an occupant talking on a smartphone or the like, and the operating sounds of electronic devices such as smartphones used by occupants. Sounds occurring outside the vehicle C include, for example, the running noise produced by the traveling vehicle C itself, the running noise of other vehicles around the vehicle C, and environmental sounds around the vehicle C (the sound of rain, wind noise, the guidance tones of pedestrian signals, and the like).
The second acquisition unit 112 acquires, as the environmental information, for example sound data picked up by the microphone 214 of the in-vehicle audio device 20A (hereinafter "picked-up sound data"). The second acquisition unit 112 may also acquire, as the environmental information, for example at least one of the traveling position of the vehicle C, its traveling speed, and an image of the outside of the vehicle captured by the camera 54. This information is used to estimate the ambient sounds.
The second acquisition unit 112 may acquire the environmental information from the in-vehicle audio device 20A, or may acquire it via the network N from a vehicle management server (not shown) that manages the vehicle C. The vehicle management server acquires, via the network N from a plurality of vehicles C traveling on the road, information indicating the traveling state of each vehicle C, information indicating the operation state of each vehicle C, detection information from the sensors mounted on each vehicle C, and the like. Using this information, the vehicle management server generates, for example, control data for controlling automated driving of the vehicle C. The vehicle management server may also use this information to estimate, for example, road congestion and distribute the congestion status via the network N.
The ambient sounds around the vehicle C change from moment to moment. For this reason, the second acquisition unit 112 continues to acquire the environmental information from the in-vehicle audio device 20A while the output sound data Do is being transmitted. When the environmental information changes, the parameter determination unit 113, described below, re-determines the parameters of the acoustic processing.
The parameter determination unit 113 determines, based on at least one of the first information and the second information, the parameters used in the acoustic processing that imparts acoustic effects to the input sound data Di. In general, once the acoustic processing to be applied to the input sound data Di is determined, one or more "parameter types" that must be determined in order to perform that processing are identified. The "parameters" determined by the parameter determination unit 113 are the specific "parameter values" corresponding to those one or more "parameter types". For example, when the "parameter type" is volume, the "parameter value" is the volume value that specifies how loud the sound is (hereinafter sometimes simply "volume").
In the first embodiment, the acoustic processing that the acoustic server 10 applies to the input sound data Di includes at least one of [A] acoustic adjustment processing, [B] environment adaptation processing, [C] volume adjustment processing, and [D] format conversion processing.
[A] Acoustic adjustment processing
The acoustic adjustment processing is processing that improves the quality of the sound output from the in-vehicle audio device 20. It comprises, for example, the various kinds of processing conventionally executed by a DSP for in-vehicle audio. Space inside the cabin of the vehicle C is limited, and the distance from the user to each speaker 230 differs. In addition, reflection of sound by the window glass and absorption of sound by the seats P occur easily, so the sound quality tends to deteriorate. The acoustic adjustment processing adjusts the sound output from the in-vehicle audio device 20 so that it is optimized for listening by an occupant seated in a seat P.
Specific examples of the acoustic adjustment processing include time alignment, equalization, and crossover. Time alignment is processing that varies the timing at which each speaker 230 outputs sound so as to focus the sound on an occupant of the vehicle C (mainly the user sitting in the driver's seat). When time alignment is performed, the parameter type is, for example, the sound output timing of each speaker 230 (for example, the delay of each of the other speakers 230 relative to a reference speaker 230). Equalization is processing that balances the sound by raising or lowering the gain (amplification of the input signal) for each frequency band. When equalization is performed, the parameter type is, for example, the gain in each frequency band. Crossover is processing that adjusts the output frequency band allocated to each speaker 230. When crossover is performed, the parameter type is, for example, the frequency band allocated to each speaker 230.
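How time-alignment delays could be derived from listener-to-speaker distances can be sketched as follows: each speaker is delayed so that sound from every speaker arrives at the listening position simultaneously. The geometry, distances, and function names are illustrative assumptions, not values from the embodiment.

```python
# Minimal sketch of time-alignment parameter derivation: the farthest
# speaker gets zero delay, and nearer speakers are delayed by the
# difference in acoustic travel time to the listener.
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 degrees C

def time_alignment_delays_ms(distances_m: dict[str, float]) -> dict[str, float]:
    """Delay (ms) for each speaker relative to the farthest one.

    distances_m maps a speaker name to its distance from the listener.
    """
    farthest = max(distances_m.values())
    return {
        name: round((farthest - d) / SPEED_OF_SOUND_M_S * 1000.0, 2)
        for name, d in distances_m.items()
    }
```

The resulting per-speaker delays correspond to the "sound output timing" parameter type named above.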
When performing the acoustic adjustment processing, the parameter determination unit 113 determines the parameters using the acoustic characteristic information of the in-vehicle audio device 20A acquired by the second acquisition unit 112. When the acoustic characteristic information is information obtained by measurement, the parameter determination unit 113 can determine the parameters directly from the acoustic characteristic information. When the acoustic characteristic information is at least one of information regarding the performance of the in-vehicle audio device 20A, information regarding the specifications of the vehicle C, and information regarding the occupants of the vehicle C, the parameter determination unit 113 runs a simulation based on this information to estimate the acoustic characteristics of the in-vehicle audio device 20 and determines the parameters accordingly.
For the equalization part of the acoustic adjustment processing, the parameters may also be determined based on the first information, such as the song title, artist name, or music genre of the input sound data Di. The parameter determination unit 113 determines the equalization parameters based on, for example, the music genre of the input sound data Di. For example, when the music genre is rock, the parameter determination unit 113 relatively raises the volume of the treble range corresponding to electric-guitar sounds and of the bass range corresponding to kick and bass sounds. When the music genre is pop, the parameter determination unit 113 relatively raises the volume of the midrange corresponding to vocals. That is, when equalization is performed, the parameter type is, for example, the relative volume of each frequency band.
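The genre-driven equalization described above can be sketched as a lookup from genre to per-band relative gains. The three-band split and the gain values are illustrative assumptions chosen to mirror the rock and pop examples in the text.

```python
# Hypothetical sketch of genre-based equalizer parameter selection:
# a music genre (from the first information's metadata) maps to
# relative gains, in dB, for each frequency band.
GENRE_EQ_GAINS_DB = {
    # Rock: boost treble (electric guitar) and bass (kick, bass guitar).
    "rock": {"bass": 4.0, "mid": 0.0, "treble": 4.0},
    # Pop: boost the midrange that carries the vocals.
    "pop": {"bass": 0.0, "mid": 3.0, "treble": 0.0},
}

def equalizer_params(genre: str) -> dict[str, float]:
    """Return per-band relative gains for a genre; flat if unknown."""
    flat = {"bass": 0.0, "mid": 0.0, "treble": 0.0}
    return GENRE_EQ_GAINS_DB.get(genre.lower(), flat)
```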
[B] Environment adaptation processing
The environment adaptation processing is processing that adjusts the volume of the sound data based on the ambient sounds of the in-vehicle audio device 20A. For example, when construction work around the vehicle C is producing noise, or when the vehicle C is traveling at high speed and its running noise is loud, the parameter determination unit 113 raises the volume of the sound output from the speakers 230. Conversely, when occupants are conversing in the cabin, for example, the parameter determination unit 113 may lower the sound output from the speakers 230. The parameter determination unit 113 may also change the frequency of the sound output from the speakers 230 to match the pitch (frequency) of the ambient sounds. That is, when the environment adaptation processing is performed, the parameter type is, for example, the volume of the sound output from the speakers 230 or the frequency band of the sound output from the speakers 230.
When performing the environment adaptation processing, the parameter determination unit 113 determines the parameters using the environmental information acquired by the second acquisition unit 112. When the environmental information is sound data picked up by the microphone 214, the parameter determination unit 113 analyzes the type and volume of the ambient sounds from that sound data and determines the parameters based on the analysis result. When the environmental information is at least one of the traveling position of the vehicle C, its traveling speed, and an image of the outside of the vehicle captured by the camera 54, the parameter determination unit 113 estimates the type and volume of the ambient sounds based on this information and determines the parameters based on the estimation result.
The type and volume of the ambient sounds are estimated, for example, as follows. When the traveling position of the vehicle C is acquired, the parameter determination unit 113 obtains from the map data MP at least one of the road surface condition at the traveling position, the predicted traffic volume, and the surrounding environment (downtown area, residential area, mountainous area, and so on). The parameter determination unit 113 may also obtain real-time traffic volume or weather information around the traveling position of the vehicle C via the network N. When an image of the outside of the vehicle captured by the camera 54 is acquired, the parameter determination unit 113 detects, by image analysis, at least one of the traffic volume, weather conditions, road surface condition, and surrounding environment around the traveling position of the vehicle C. From this information, the parameter determination unit 113 estimates the type and volume of the ambient sounds of the in-vehicle audio device 20A. When the traveling speed of the vehicle C is acquired, the parameter determination unit 113 estimates the volume of the running noise of the vehicle C based on the traveling speed. In general, the faster the traveling speed, the louder the running noise.
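The speed-based part of this estimation can be sketched as a simple monotone noise model whose output feeds the playback-volume parameter. The linear noise model, the noise floor, and the target margin are invented for illustration; the embodiment does not specify any particular formula.

```python
# Illustrative sketch: estimate running noise from traveling speed and
# raise the playback level so it stays a fixed margin above the noise.
def estimate_running_noise_db(speed_kmh: float) -> float:
    """Crude running-noise estimate: louder at higher speed."""
    base_db = 50.0  # assumed cabin noise floor when stationary
    return base_db + 0.25 * speed_kmh

def playback_volume(speed_kmh: float, target_margin_db: float = 10.0) -> float:
    """Choose an output level target_margin_db above the estimated noise."""
    return estimate_running_noise_db(speed_kmh) + target_margin_db
```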
[C] Volume adjustment processing
The volume adjustment processing is processing that adjusts the volume of each sound when two or more sounds are output simultaneously. The volume adjustment processing is executed, for example, when a system sound, such as a guidance voice from the navigation device 52 or a warning tone from the safety system of the vehicle C, is to be output while a sound such as a piece of music is being output. Specifically, when the guidance voice of the navigation device 52 (a guidance voice based on the guidance voice data Dsa) is to be output while music based on the distribution sound data Dsn is playing, the parameter determination unit 113 determines, as parameters, the volume of the distribution sound data Dsn and the volume of the guidance voice data Dsa. More specifically, the parameter determination unit 113 determines the volumes so that the music based on the distribution sound data Dsn is quieter than the guidance voice.
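The music-below-voice rule above amounts to what audio engineers call "ducking", and a minimal sketch is shown below. The 0.0-1.0 level scale and the margin value are illustrative assumptions, not values from the embodiment.

```python
# Minimal ducking sketch: while a guidance voice plays, cap the music
# level so it sits at least `margin` below the voice level.
def duck_volumes(music_level: float, voice_level: float,
                 margin: float = 0.2) -> tuple[float, float]:
    """Return (music, voice) levels on a 0.0-1.0 scale, with the music
    no higher than the guidance voice minus the margin."""
    ducked_music = min(music_level, max(voice_level - margin, 0.0))
    return ducked_music, voice_level
```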
As another example of the acoustic adjustment processing, whether each of the speakers 230A to 230F outputs sound may be changed depending on whether an occupant is present in each of the seats P1 to P4. In this case, the second acquisition unit 112 acquires, as the second information, information indicating the presence or absence of an occupant in each of the seats P1 to P4 of the vehicle C. The information indicating the presence or absence of an occupant may be, for example, an image of the cabin interior captured by the camera 54, or the detection result of a seating sensor (not shown) provided in each of the seats P1 to P4. The parameter determination unit 113 sets the volume output from the speaker 230 corresponding to a seat P in which no occupant is sitting to zero or lower than normal.
[D] Format conversion processing
The format conversion processing is processing that converts the input sound data Di into a format that the in-vehicle audio device 20A can play. As described above, sound data exists in various formats, and the input sound data Di may not be in a format that the in-vehicle audio device 20A can play. In addition, a dedicated application is generally used for each distribution service provided by a distribution server 30. To use multiple distribution services, the user would have to install each dedicated application and, as needed, perform tasks such as upgrading it. By performing the format conversion processing, the distribution sound data Dsn can be used without installing a dedicated application for each distribution service on the in-vehicle audio device 20A. This eliminates the troublesome installation and upgrading of dedicated applications and makes it possible to use multiple distribution services seamlessly. The formats that the in-vehicle audio device 20A can play are, for example, the above-mentioned MP3, AAC, FLAC, and WAV-PCM. More preferably, the format that the in-vehicle audio device 20A plays (the format after the format conversion processing) is FLAC (lossless compression) or WAV-PCM (uncompressed), which impose a light decompression (decoding) load and do not degrade the sound quality.
 The parameter determination unit 113 determines whether the format conversion process is necessary based on information regarding the format of the input sound data Di, which is the first information, and information included in the information M that specifies the formats of sound data playable by the in-vehicle audio device 20A. Specifically, when the input sound data Di is in a format playable by the in-vehicle audio device 20, the parameter determination unit 113 determines that the format conversion process is unnecessary. When the input sound data Di is not in a format playable by the in-vehicle audio device 20, the parameter determination unit 113 determines to convert the input sound data Di into a playable format. That is, the parameter determination unit 113 determines, as parameters, whether the format conversion process is necessary for the input sound data Di and, if so, the destination format.
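The decision logic described above can be illustrated with the following sketch. This is hypothetical code, not part of the disclosure: the function name, the `playable_formats` argument (standing in for the format list carried in the information M), and the preference order are assumptions.

```python
# Hypothetical sketch of the format-conversion decision described above.
PREFERRED_FORMATS = ["FLAC", "WAV-PCM"]  # light decode load, no quality loss


def decide_format_conversion(input_format: str, playable_formats: list[str]):
    """Return (needs_conversion, target_format) for input sound data Di."""
    if input_format in playable_formats:
        return (False, None)  # Di is already playable: no conversion needed
    # Prefer a lossless or uncompressed target when the device supports one.
    for fmt in PREFERRED_FORMATS:
        if fmt in playable_formats:
            return (True, fmt)
    return (True, playable_formats[0])  # fall back to any playable format
```

Under this sketch, an OGG input sent to a device that can play MP3 and FLAC would be converted to FLAC, while an MP3 input sent to the same device would pass through unchanged.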
 The output sound generation unit 114 generates the output sound data Do used by the in-vehicle audio device 20A by processing the input sound data Di with the parameters determined by the parameter determination unit 113. The output sound generation unit 114 generates the output sound data Do by performing at least one of the acoustic adjustment process, the environment adaptation process, the volume adjustment process, and the format conversion process on the input sound data Di acquired by the first acquisition unit 111.
 The first transmission control unit 115 transmits the output sound data Do to the in-vehicle audio device 20A via the network N. The first transmission control unit 115 is an example of a data transmission control unit. The output sound data Do transmitted by the first transmission control unit 115 is received by the reception control unit 254 of the in-vehicle audio device 20A.
 The change reception unit 116 receives parameter changes from the user of the in-vehicle audio device 20A while the output sound data Do is being transmitted. The change reception unit 116 is an example of a reception unit. A parameter change may be, for example, a change in volume, or a change in the relationship between frequency bands and gain in an equalizer (such as boosting the bass range). When the change reception unit 116 receives a parameter change, the parameter determination unit 113 changes the parameters used for the acoustic processing to the parameters set by the user. As a result, acoustic processing that reflects the user's change is performed.
 Further, when the change reception unit 116 receives a parameter change, the content of the change is reflected in subsequent acoustic processing. Specifically, assume that the user changes a parameter during transmission of first output sound data Do, which is an example of the output sound data Do. The first output sound data Do is data generated by performing acoustic processing on first input sound data Di, which is an example of the input sound data Di. In this case, the change reception unit 116 associates the identification information of the in-vehicle audio device 20A, the identification information of the first input sound data Di, and the parameters before and after the change, and stores them in the storage device 102 as user setting data US. When output sound data Do based on the first input sound data Di is next transmitted to the in-vehicle audio device 20A, the parameter determination unit 113 reads the user-changed parameters from the user setting data US. The output sound generation unit 114 performs acoustic processing using the parameters read from the user setting data US to generate the output sound data Do.
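The storage and reuse of the user setting data US can be sketched as follows. This is an illustrative assumption about the data layout: the dictionary stands in for the storage device 102, and the function names and key structure are hypothetical.

```python
# Hypothetical sketch of user setting data US: parameter changes are stored
# keyed by (device id, input sound data id) and reused the next time the
# same input sound data is processed for that device.
user_settings = {}  # stands in for the storage device 102


def record_change(device_id, sound_id, before, after):
    """Store the parameters before and after a user's change."""
    user_settings[(device_id, sound_id)] = {"before": before, "after": after}


def params_for(device_id, sound_id, default_params):
    """Return the user-changed parameters if present, else the defaults."""
    entry = user_settings.get((device_id, sound_id))
    return entry["after"] if entry else default_params
```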
 Note that a parameter change made by the user may be reflected not only for the first input sound data Di itself but also, for example, for input sound data Di of the same type as the first input sound data Di. For example, assume that the first input sound data Di is music data whose genre is rock. In this case, even when the input sound data Di is rock music data other than the first input sound data Di, acoustic processing may be performed using the user-changed parameters. Also, for example, if the first input sound data Di is guidance voice data Dsa from the navigation device 52, the user-changed parameter values may be used for subsequent acoustic processing of the guidance voice data Dsa.
 Further, the acoustic server 10 may aggregate the parameter changes received from the users of the plurality of in-vehicle audio devices 20A to 20N and reflect the result in the parameter determination by the parameter determination unit 113. For example, if many users make similar parameter changes to the output sound data Do generated based on second input sound data Di, which is an example of the input sound data Di, the parameters determined by the parameter determination unit 113 may not match the preferences of many users. In this case, the parameter determination unit 113 sets the parameters used for the acoustic processing of the second input sound data Di to the parameters as changed by those many users. This makes it possible to realize acoustic processing that reflects the tastes or trends of many users.
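One way to aggregate such changes can be sketched as follows. The majority threshold and the "most common change wins" rule are illustrative assumptions; the disclosure does not specify the aggregation method.

```python
# Hypothetical sketch of aggregating parameter changes across users.
from collections import Counter


def aggregate_changes(changes, threshold=0.5):
    """Adopt the most common change as the new default if a majority of
    users made it. `changes` maps user id -> changed value (hashable)."""
    if not changes:
        return None
    value, count = Counter(changes.values()).most_common(1)[0]
    return value if count / len(changes) > threshold else None
```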
A-3-2: In-vehicle audio device 20
 As shown in FIG. 5, the control device 216 of the in-vehicle audio device 20 functions as a vehicle information transmission unit 251, a setting reception unit 252, a second transmission control unit 253, a reception control unit 254, and an output control unit 255.
 The vehicle information transmission unit 251 transmits at least one of the acoustic characteristic information of the in-vehicle audio device 20A and the environmental information of the vehicle C to the acoustic server 10. The acoustic characteristic information or environmental information transmitted from the vehicle information transmission unit 251 is acquired by the second acquisition unit 112 of the acoustic server 10.
 As shown in FIG. 3A or FIG. 3B, when the user wishes to use distributed sound data Dsn, the setting reception unit 252 receives a selection of the desired distributed sound data Dsn from among the plurality of distributed sound data Dsn distributed by the distribution server 30. The setting reception unit 252 also transmits information M specifying the selected distributed sound data Dsn to the acoustic server 10.
 Further, while sound based on the output sound data Do is being output, the setting reception unit 252 receives changes to the acoustic processing parameters from the user of the in-vehicle audio device 20A. As described above, a parameter change may be, for example, a change in volume, a change in the relationship between frequency bands and gain in an equalizer (such as boosting the bass range), or a change to some other parameter. The setting reception unit 252 transmits the content of the parameter change received from the user to the acoustic server 10.
 The second transmission control unit 253 transmits local sound data Dsl to the acoustic server 10. The local sound data Dsl transmitted by the second transmission control unit 253 is, for example, acquired sound data Dsy or stored sound data Dsm whose output is instructed by the user, system sound data Dss output from the vehicle ECU 50, and guidance voice data Dsa output from the navigation device 52.
 The reception control unit 254 receives the output sound data Do from the acoustic server 10 via the network N. The output sound data Do is data obtained by performing acoustic processing on the local sound data Dsl or on the distributed sound data Dsn.
 The output control unit 255 outputs the output sound data Do received by the reception control unit 254 to the amplifier 220. The amplifier 220 amplifies the output sound data Do and outputs it to the speaker 230. The speaker 230 outputs sound based on the output sound data Do.
A-4: Operation of the control device 103
 FIG. 7 is a flowchart showing the operation of the control device 103 of the acoustic server 10. In the following flowchart, the various data may be transmitted and received in file units or in packet units. The control device 103 functions as the first acquisition unit 111 and acquires input sound data Di from the in-vehicle audio device 20A or the distribution server 30 (step S20). The control device 103 functions as the second acquisition unit 112 and acquires at least one of the first information regarding the attributes of the input sound data Di and the second information regarding the in-vehicle audio device 20A (step S21).
 The control device 103 functions as the parameter determination unit 113 and determines, based on at least one of the first information and the second information, the parameters used for acoustic processing of the input sound data Di (step S22). The control device 103 functions as the output sound generation unit 114 and generates the output sound data Do used by the in-vehicle audio device 20A by performing acoustic processing on the input sound data Di using the parameters determined in step S22 (step S23). The control device 103 functions as the first transmission control unit 115 and transmits the output sound data Do to the in-vehicle audio device 20A (step S24).
 During transmission of the output sound data Do, the control device 103 functions as the second acquisition unit 112 and acquires environmental information, which is an example of the second information, from the in-vehicle audio device 20A (step S25). The control device 103 determines, based on the environmental information, whether the ambient sound around the in-vehicle audio device 20A has changed (step S26). If the ambient sound has not changed (step S26: NO), the control device 103 advances the process to step S28. On the other hand, if the ambient sound has changed (step S26: YES), the control device 103 functions as the parameter determination unit 113 and changes the acoustic processing parameters to suit the changed ambient sound (step S27). Note that if the environmental information is not acquired in step S21, or if the environmental information is not used for the parameter determination in step S22, the control device 103 may skip steps S25 to S27.
 Also, during transmission of the output sound data Do, the control device 103 functions as the change reception unit 116 and receives changes to the acoustic processing parameters from the user (step S28). If there is no parameter change from the user (step S28: NO), the control device 103 advances the process to step S30. On the other hand, if a parameter change is received from the user (step S28: YES), the control device 103 functions as the parameter determination unit 113 and changes the parameters in accordance with the user's change (step S29). Until transmission of the output sound data Do based on the input sound data Di acquired in step S20 is completed (step S30: NO), the control device 103 returns the process to step S23. When transmission of the output sound data Do is completed (step S30: YES), the control device 103 returns the process to step S20.
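The flow of steps S20 to S30 can be expressed as the following control-loop sketch. This is a hypothetical abstraction, not code from the disclosure: each callable stands in for one of the functional units described above.

```python
# Hypothetical sketch of the control flow of FIG. 7 (steps S20-S30).
def serve_one_input(acquire, decide_params, process, send,
                    ambient_changed, readapt, user_change, done):
    di = acquire()                    # S20/S21: input data and information
    params = decide_params(di)        # S22: initial parameter decision
    while True:
        send(process(di, params))     # S23/S24: generate and transmit Do
        if ambient_changed():         # S25/S26: environmental information
            params = readapt(params)  # S27: adapt to changed ambient sound
        change = user_change()        # S28: parameter change from the user?
        if change is not None:
            params = change           # S29: apply the user's parameters
        if done():                    # S30: transmission of Do finished
            break                     # return to S20 for the next input
```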
A-5: Summary of the first embodiment
 As explained above, in the first embodiment, the acoustic server 10 generates the output sound data Do by performing acoustic processing on the input sound data Di and transmits the output sound data Do to the in-vehicle audio device 20A. Therefore, a control device for performing acoustic processing need not be provided in the in-vehicle audio device 20A. This simplifies the configuration of the in-vehicle audio device 20A and, as a result, reduces its cost. Furthermore, the acoustic server 10 determines the parameters used for the acoustic processing of the input sound data Di based on at least one of the first information regarding the attributes of the input sound data Di and the second information regarding the in-vehicle audio device 20A. Since the acoustic processing parameters are thus set appropriately, the quality of the sound based on the output sound data Do is improved.
 Further, the acoustic server 10 acquires, as the second information, information indicating the acoustic characteristics of the in-vehicle audio device 20A. Since the acoustic characteristics of the in-vehicle audio device 20A are reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suited to using the output sound data Do on the in-vehicle audio device 20A.
 Further, the acoustic server 10 acquires, as the second information, the environmental information of the in-vehicle audio device 20A. Since the ambient sound around the in-vehicle audio device 20A is reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suited to using the output sound data Do in an environment where ambient sound is present.
 Further, the acoustic server 10 continuously acquires the environmental information while transmitting the output sound data Do and re-determines the parameters when the environmental information changes. Since changes in the ambient sound are reflected in the acoustic processing parameters, the acoustic server 10 can perform acoustic processing suited to using the output sound data Do in an environment where the ambient sound changes from moment to moment.
 Further, the acoustic server 10 determines whether the format conversion process is necessary for the input sound data Di and, if so, the destination format. This allows the user to use the input sound data Di on the in-vehicle audio device 20A without being aware of its format.
 Further, the acoustic server 10 accepts changes to the acoustic processing parameters from the user. Therefore, the acoustic server 10 can perform acoustic processing that reflects the user's preferences, or circumstances not reflected in the first information or the second information.
 Further, the acoustic server 10 acquires the input sound data Di from the distribution server 30, which distributes sound data via the network N. Therefore, the user can use various sound data other than the local sound data Dsl of the in-vehicle audio device 20A on that device, which improves the user's convenience.
 Further, the acoustic server 10 acquires the input sound data Di from the in-vehicle audio device 20A. Since the acoustic server 10 can perform acoustic processing on sound data acquired from the in-vehicle audio device 20A, the processing load on the in-vehicle audio device 20A is reduced compared to the case where the in-vehicle audio device 20A performs the acoustic processing itself.
 Further, the acoustic server 10 acquires, as the input sound data Di, at least one of sound data stored in the in-vehicle audio device 20A and sound data output by equipment connected to the in-vehicle audio device 20A. The acoustic server 10 performs acoustic processing on at least one of these. This improves the user's convenience.
 Further, the acoustic server 10 generates the output sound data Do used by the in-vehicle audio device 20A, which outputs sound into the cabin of the vehicle C. Therefore, the acoustic server 10 can improve the quality of the sound output into the cabin of the vehicle C, where, unlike inside a building, a well-conditioned listening environment is not available.
B: Second Embodiment
 A second embodiment of the present disclosure will be described. In each of the embodiments illustrated below, for elements whose functions are the same as in the first embodiment, the reference numerals used in the description of the first embodiment are reused and detailed descriptions are omitted as appropriate. The system configuration in the second embodiment is the same as in the first embodiment, so its description is omitted.
 FIG. 8 is a block diagram showing the configuration of the acoustic server 10 in the second embodiment. In the second embodiment, the control device 103 of the acoustic server 10 functions as a vehicle sound generation unit 117 in addition to the functional configuration of the first embodiment. The vehicle sound generation unit 117 generates vehicle sound data indicating sound to be output from the in-vehicle audio device 20A based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. The first transmission control unit transmits the vehicle sound data to the in-vehicle audio device 20A via the network N. In this embodiment, the vehicle sound data includes, for example, [1] engine sound data Dse representing a virtual engine sound, or [2] alarm sound data Dsk notifying the user of surrounding obstacles and the like.
[1] Engine sound data Dse
 For example, when the vehicle C is an electric vehicle powered by an electric motor, a virtual engine sound may be output in order to give the user riding in the vehicle C a sense of driving. The vehicle sound generation unit 117 generates engine sound data Dse corresponding to the engine sound to be output from the in-vehicle audio device 20A.
 More specifically, the second acquisition unit 112 acquires, as the second information, traveling speed information and accelerator opening information of the vehicle C from the in-vehicle audio device 20A. The traveling speed information is an example of information indicating the traveling state of the vehicle C. The accelerator opening information is an example of information indicating the operating state of the vehicle C.
 As shown in FIG. 8, engine sound data Dse is stored in the storage device 102 of the acoustic server 10. The engine sound data Dse includes a plurality of engine sound data Dse_1 to Dse_25 (see FIG. 9). The vehicle sound generation unit 117 selects one engine sound data Dse from among the plurality of engine sound data Dse_1 to Dse_25. The first transmission control unit 115 transmits the selected engine sound data Dse to the in-vehicle audio device 20A as vehicle sound data.
 More specifically, the vehicle sound generation unit 117 determines a virtual engine rotation speed of the vehicle C (hereinafter referred to as the "virtual engine rotation speed") based on the traveling speed information of the vehicle C. The vehicle sound generation unit 117 determines the virtual engine rotation speed based on, for example, reference information (not shown) indicating a correspondence between the traveling speed of the vehicle C and the virtual engine rotation speed.
 The vehicle sound generation unit 117 selects one engine sound data Dse from the plurality of engine sound data Dse_1 to Dse_25 based on the virtual engine rotation speed and the accelerator opening information. FIG. 9 schematically shows a map F for selecting engine sound data Dse from the virtual engine rotation speed and the accelerator opening information. The map F is stored, for example, in the storage device 102. Although FIG. 9 shows a case where there are 25 engine sound data Dse, the number of engine sound data Dse is not limited to 25. The vehicle sound generation unit 117 identifies, in the map F, the one engine sound data Dse corresponding to the region where the virtual engine rotation speed of the vehicle C and the accelerator opening information intersect, and reads that engine sound data Dse from the storage device 102. The first transmission control unit 115 transmits the engine sound data Dse read by the vehicle sound generation unit 117 to the in-vehicle audio device 20A as vehicle sound data.
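A lookup in the map F can be sketched as follows. The 5x5 layout follows the 25 engine sound data of FIG. 9, but the band boundaries below are illustrative assumptions, not values from the disclosure.

```python
# Hypothetical sketch of selecting one engine sound Dse_n from map F.
RPM_BANDS = [1500, 3000, 4500, 6000]  # virtual engine rpm thresholds (assumed)
THROTTLE_BANDS = [20, 40, 60, 80]     # accelerator opening % thresholds (assumed)


def band(value, bounds):
    """Index of the band that `value` falls into (0 .. len(bounds))."""
    return sum(value >= b for b in bounds)


def select_engine_sound(virtual_rpm, accel_opening):
    """Return the index n (1..25) of engine sound data Dse_n in map F."""
    row = band(virtual_rpm, RPM_BANDS)
    col = band(accel_opening, THROTTLE_BANDS)
    return row * 5 + col + 1  # Dse_1 .. Dse_25
```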
[2] Alarm sound data Dsk
 In the first embodiment, the various alarm sounds accompanying the travel of the vehicle C were included in the system sound data Dss output from the vehicle ECU 50. In the second embodiment, the vehicle sound generation unit 117 generates alarm sound data Dsk based on at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. As shown in FIG. 8, alarm sound data Dsk is stored in the storage device 102 of the acoustic server 10. The alarm sound data Dsk includes a plurality of alarm sound data Dsk. The vehicle sound generation unit 117 selects one alarm sound data Dsk from among the plurality of alarm sound data Dsk. The first transmission control unit 115 transmits the selected alarm sound data Dsk to the in-vehicle audio device 20A as vehicle sound data.
 Specifically, for example, the second acquisition unit 112 acquires information indicating the operating state of the shift lever as information indicating the operating state of the vehicle C. When the shift lever is in reverse (R), the vehicle sound generation unit 117 selects, from among the plurality of alarm sound data Dsk, the alarm sound data Dsk indicating that the vehicle C is moving backward, and transmits it to the in-vehicle audio device 20A as vehicle sound data. Also, for example, the second acquisition unit 112 acquires the traveling speed information of the vehicle C as information indicating the traveling state of the vehicle C. When the traveling speed of the vehicle C exceeds the speed limit, the vehicle sound generation unit 117 selects, from among the plurality of alarm sound data Dsk, the alarm sound data Dsk indicating that the speed limit is being exceeded, and transmits it to the in-vehicle audio device 20A as vehicle sound data.
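The two selection rules above can be sketched as follows. The alarm identifiers (`Dsk_reverse`, `Dsk_overspeed`) and the function shape are hypothetical; the disclosure only describes the conditions.

```python
# Hypothetical sketch of the alarm sound selection described above.
def select_alarm_sounds(shift_position, speed_kmh, speed_limit_kmh):
    """Return identifiers of the alarm sound data Dsk to transmit, if any."""
    alarms = []
    if shift_position == "R":        # reversing: vehicle C is moving backward
        alarms.append("Dsk_reverse")
    if speed_kmh > speed_limit_kmh:  # the speed limit is being exceeded
        alarms.append("Dsk_overspeed")
    return alarms
```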
 Note that, as also shown in the first embodiment, at least one of the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C may be used for the acoustic processing of the input sound data Di. For example, suppose that information indicating the operating position of the shift lever is acquired as information indicating the operating state of the vehicle C, and that the shift lever is in reverse (R). In this case, a situation requiring careful driving, such as parking in a garage, is predicted, so the volume of sound data other than the system sound may be reduced. Also, suppose that the traveling speed information of the vehicle C is acquired as information indicating the traveling state of the vehicle C, and that the vehicle C is accelerating. In this case, the volume of the speaker 230 may be increased to match the louder running noise of the vehicle C.
 Regarding the generation of the alarm sound data Dsk, the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C need not be acquired from the in-vehicle audio device 20A. For example, the operating state or the traveling state of the vehicle C may be detected from images of surveillance cameras placed on the road or images of cameras mounted on other vehicles C.
 As explained above, in the second embodiment, the acoustic server 10 acquires, as the second information, at least one of information indicating the operating state of the vehicle C and information indicating the traveling state of the vehicle C. Therefore, the acoustic server 10 can reflect at least one of the operating state and the traveling state of the vehicle C in the acoustic processing of the input sound data Di.
 Also in the second embodiment, the acoustic server 10 generates vehicle sound data, which represents the sound to be output from the in-vehicle audio device 20A, based on at least one of the information indicating the operating state of the vehicle C and the information indicating the traveling state of the vehicle C. Compared with generating the vehicle sound data in the vehicle ECU 50, this reduces the processing load on the vehicle ECU 50.
C: Third Embodiment
 A third embodiment of the present disclosure will now be described. In the embodiments described below, elements whose functions are the same as in the first embodiment reuse the reference numerals introduced in the description of the first embodiment, and their detailed descriptions are omitted as appropriate. The system configuration of the third embodiment is the same as that of the first embodiment, so its description is omitted.
 FIG. 10 is a block diagram showing the configuration of the in-vehicle audio device 20A in the third embodiment. In the third embodiment, the control device 216 of the in-vehicle audio device 20A functions as a setting reception unit 252, a reception control unit 254, and an output control unit 255. In other words, in the third embodiment the control device 216 does not function as the vehicle information transmission unit 251 or the second transmission control unit 253. That is, in the third embodiment, the in-vehicle audio device 20A does not transmit the local sound data Dsl to the acoustic server 10, nor does it transmit the acoustic characteristic information and the environment information.
 In the third embodiment, the in-vehicle audio device 20A further includes a sound control device 240 between the control device (main control device) 216 and the amplifier 220. The sound control device 240 is built around a processor with lower performance than the control device 216 of the head unit 200. The sound control device 240 adjusts the balance between the sound based on the output sound data Do and the sound based on the local sound data Dsl. The sound based on the local sound data Dsl is at least one of a sound based on sound data stored in the in-vehicle audio device 20A and a sound based on sound data output by a device connected to the in-vehicle audio device 20A.
 In the third embodiment, when the user wishes to use distribution sound data Dsn, the setting reception unit 252 accepts a selection of the desired distribution sound data Dsn from among the plurality of distribution sound data Dsn distributed by the distribution server 30. The setting reception unit 252 then transmits information M identifying the selected distribution sound data Dsn to the acoustic server 10.
 The reception control unit 254 receives the output sound data Do from the acoustic server 10 via the network N. In the third embodiment, the output sound data Do is data obtained by applying acoustic processing to the distribution sound data Dsn. Specifically, the output sound generation unit 114 of the acoustic server 10 applies a format conversion process to the input sound data Di to generate the output sound data Do: it converts the input sound data Di, which is the distribution sound data Dsn, into a format that the sound control device 240 can play.
 The output control unit 255 passes the output sound data Do received by the reception control unit 254 to the sound control device 240. The sound control device 240 plays the output sound data Do and feeds it to the amplifier 220. The amplifier 220 amplifies the output sound data Do and outputs it to the speaker 230, which emits the sound based on the output sound data Do.
 The output control unit 255 also passes the local sound data Dsl to the sound control device 240. The sound control device 240 processes each stream of sound data so as to adjust the balance between the sound based on the output sound data Do and the sound based on the local sound data Dsl, and outputs the result to the amplifier 220. For example, when the guidance voice of the navigation device 52 (the guidance voice based on the guidance voice data Dsa) must be output while playback based on the output sound data Do is in progress, the sound control device 240 lowers the volume of the playback sound before outputting the guidance voice.
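The balance adjustment just described is a form of ducking. A minimal sketch follows; the sample-list representation and the `duck_gain` value are assumptions made for illustration, not part of the sound control device 240's actual implementation.

```python
def mix_with_ducking(playback, guidance=None, duck_gain=0.3):
    """Mix a playback stream with an optional guidance voice.

    While guidance audio is present, the playback is attenuated (ducked)
    so the guidance stays intelligible; otherwise the playback passes
    through unchanged. Streams are modeled as lists of float samples.
    """
    if guidance is None:
        return list(playback)
    ducked = [s * duck_gain for s in playback]
    # Sum the ducked playback with the guidance voice sample by sample
    return [p + g for p, g in zip(ducked, guidance)]
```

In a real device this would run per audio block in the output path, restoring the original playback volume once the guidance voice ends.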
 As described above, in the third embodiment the local sound data Dsl, the acoustic characteristic information, and the environment information are not transmitted to the acoustic server 10. The third embodiment therefore requires less communication with the acoustic server 10 than the first embodiment and can be used even where the communication environment is poor. In the third embodiment, the in-vehicle audio device 20A includes the sound control device 240, which adjusts the balance between the sound based on the local sound data Dsl and the sound based on the output sound data Do. The acoustic server 10 generates the output sound data Do by converting the input sound data Di into a format that the sound control device 240 can play. The user can therefore use the distribution sound data Dsn without being aware of which formats the sound control device 240 supports. In addition, work such as installing a dedicated application for each distribution server 30 and keeping each such application up to date becomes unnecessary, improving user convenience.
D: Fourth Embodiment
 A fourth embodiment of the present disclosure will now be described. In the embodiments described below, elements whose functions are the same as in the first embodiment reuse the reference numerals introduced in the description of the first embodiment, and their detailed descriptions are omitted as appropriate.
 FIG. 11 illustrates the configuration of an information processing system 2 according to the fourth embodiment. Like the information processing system 1, the information processing system 2 includes a plurality of in-vehicle audio devices 20 (20A to 20N). The information processing system 2 includes the acoustic server 10, the in-vehicle audio device 20A, and a smartphone 40. In the fourth embodiment, the acoustic server 10 is an example of an information processing device that performs processing relating to the plurality of output sound data Do used by the respective in-vehicle audio devices 20A to 20N.
 In the first to third embodiments described above, the acoustic server 10 both determined the parameters used for acoustic processing and applied the acoustic processing to the input sound data Di to generate the output sound data Do. In the fourth embodiment, by contrast, the acoustic server 10 determines the parameters used for acoustic processing and also determines which acoustic processing device performs that processing. In this embodiment, the acoustic processing device can be, for example, at least one of the acoustic server 10, the smartphone 40, and the in-vehicle audio device 20A. Hereinafter, the acoustic server 10, the smartphone 40, and the in-vehicle audio device 20A, as candidates for the acoustic processing device, are referred to as "candidate devices."
 In this embodiment, the smartphone 40 and the in-vehicle audio device 20A are assumed to be capable of executing at least part of the acoustic processing described above. Specifically, a program for performing at least part of the acoustic processing is installed in the control device of the smartphone 40 and in the control device 216 of the in-vehicle audio device 20A, so that each can execute acoustic processing using parameters transmitted from the acoustic server 10.
 The smartphone 40 is an electronic device carried by a user of the vehicle C and the in-vehicle audio device 20A, and is an example of a device other than the acoustic server 10. Inside the vehicle C, the smartphone 40 communicates with the in-vehicle audio device 20A via short-range wireless communication such as Bluetooth (registered trademark). A music distribution application, for example, is installed on the smartphone 40, enabling it to acquire distribution sound data Dsn from the distribution server 30 (see FIG. 2 and elsewhere). The distribution sound data Dsn acquired by the smartphone 40 is, for example, transmitted to the in-vehicle audio device 20A and output from the speaker of the in-vehicle audio device 20A.
 FIG. 12 is a block diagram showing the configuration of the acoustic server 10 in the fourth embodiment. In the fourth embodiment, the control device 103 of the acoustic server 10 functions as a device determination unit 118 and a third transmission control unit 119 in addition to the first acquisition unit 111, the second acquisition unit 112, the parameter determination unit 113, the output sound generation unit 114, and the first transmission control unit 115.
 Of these, the second acquisition unit 112 and the parameter determination unit 113 function as in the first embodiment. The second acquisition unit 112 acquires at least one of first information regarding the attributes of the input sound data Di and second information regarding the in-vehicle audio device 20A; it is an example of an information acquisition unit. The parameter determination unit 113 determines, based on at least one of the first information and the second information, the parameters used for the acoustic processing that imparts acoustic effects to the input sound data Di.
 The device determination unit 118 determines, based on at least one of the first information and the second information, the acoustic processing device that applies the acoustic processing to the input sound data Di. As an example of the first information, the device determination unit 118 may acquire the terms of use of the distribution sound data Dsn (whether the acoustic server 10 is permitted to acquire the input sound data Di). The terms of use of a distribution service may stipulate that the distribution sound data Dsn can be acquired only by devices used by users registered with that service (for example, the smartphone 40 or the in-vehicle audio device 20A). In that case, the acoustic server 10 cannot acquire the distribution sound data Dsn and cannot serve as the acoustic processing device, so the device determination unit 118 selects the smartphone 40 or the in-vehicle audio device 20A as the acoustic processing device.
 The device determination unit 118 may also acquire, as the second information, information indicating the communication status between the in-vehicle audio device 20A and the acoustic server 10. If the communication condition between the two is poor, transmission of the output sound data Do after acoustic processing may be delayed, so the device determination unit 118 decides that the in-vehicle audio device 20A or the smartphone 40 should perform the acoustic processing.
 The device determination unit 118 may also determine the acoustic processing device based on information about the acoustic server 10, the smartphone 40, and the in-vehicle audio device 20A (hereinafter, "candidate device information"). The candidate device information is, for example, information about the performance of a candidate device, such as the product number (model number) of the candidate device or of its components (control device, recording device, and so on). The device determination unit 118 decides that a candidate device will perform the acoustic processing when that device has sufficient data processing capability to execute it. If a candidate device with low processing capability were assigned high-load acoustic processing, delays could occur; obtaining information about each candidate device's performance makes it possible to set an appropriate acoustic processing load for each candidate device.
 The candidate device information may also be, for example, the real-time operating status (processing load) of each candidate device. If a candidate device is already performing high-load processing other than the acoustic processing, imposing the acoustic processing on it as well may cause delays. The device determination unit 118 may therefore select, as the acoustic processing device, the candidate device with the smallest current processing load.
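The selection criteria above (licensing constraints from the first information, capability, and current load from the candidate device information) can be combined as in the following sketch. The dictionary schema and the tie-breaking policy are assumptions; the embodiment does not fix a concrete data structure.

```python
def choose_processing_device(candidates):
    """Pick the acoustic processing device from a dict of candidate devices.

    Each candidate maps a device name to assumed attributes:
      licensed - whether the terms of use allow it to acquire the sound data
      capable  - whether it has enough processing capability for the task
      load     - current processing load in [0.0, 1.0]

    Assumed policy: among licensed, capable devices, choose the one with
    the smallest current load.
    """
    eligible = {name: info for name, info in candidates.items()
                if info["licensed"] and info["capable"]}
    if not eligible:
        raise ValueError("no eligible acoustic processing device")
    return min(eligible, key=lambda name: eligible[name]["load"])

devices = {
    # the server is excluded here by the licensing constraint
    "acoustic_server": {"licensed": False, "capable": True, "load": 0.2},
    "smartphone":      {"licensed": True,  "capable": True, "load": 0.6},
    "head_unit":       {"licensed": True,  "capable": True, "load": 0.3},
}
```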
 Next, the processing performed after the acoustic processing device has been determined is described, for the cases in which the acoustic processing device is the acoustic server 10, a device other than the acoustic server 10, or a plurality of devices.
[When the acoustic processing device is the acoustic server 10]
 When the device determination unit 118 selects the acoustic server 10 as the acoustic processing device, the first acquisition unit 111 acquires the input sound data Di. The output sound generation unit 114 applies acoustic processing to the input sound data Di using the parameters determined by the parameter determination unit 113, thereby generating the output sound data Do used by the in-vehicle audio device 20A. The first transmission control unit 115 transmits the output sound data Do to the in-vehicle audio device 20A via the network N.
[When the acoustic processing device is a device other than the acoustic server 10]
 When the device determination unit 118 selects a device other than the acoustic server 10 (the smartphone 40 or the in-vehicle audio device 20A) as the acoustic processing device, the third transmission control unit 119 transmits the parameters determined by the parameter determination unit 113 to that device. The third transmission control unit 119 is an example of a parameter transmission control unit. The input sound data Di may be transmitted to the other device together with the parameters. The other device generates the output sound data Do by applying acoustic processing to the input sound data Di using the parameters. When the smartphone 40 is the acoustic processing device, the smartphone 40 transmits the output sound data Do to the in-vehicle audio device 20A.
[When the acoustic processing device is a plurality of devices]
 When the acoustic processing consists of a plurality of processes (steps), the series of processes may be shared among a plurality of devices, which avoids concentrating the processing load on any single device. For example, the acoustic server 10 may perform some of the processes while another device (for example, the smartphone 40) performs the rest. Suppose the acoustic processing includes a first process and a second process. The parameter determination unit 113 determines a first parameter used in the first process and a second parameter used in the second process, and the device determination unit 118 decides that the acoustic server 10 executes the first process and the smartphone 40 executes the second process. In this case, the smartphone 40 is an example of a device other than the acoustic server 10. The output sound generation unit 114 applies the first process to the input sound data Di using the first parameter to generate partially processed data. The first transmission control unit 115 transmits the partially processed data and the second parameter to the smartphone 40. The smartphone 40 applies the second process to the partially processed data using the second parameter to generate the output sound data Do, which is transmitted from the smartphone 40 to the in-vehicle audio device 20A and output from the speaker 230 of the in-vehicle audio device 20A.
 The plurality of devices sharing the acoustic processing may also be the smartphone 40 and the in-vehicle audio device 20A. Again, suppose the acoustic processing includes a first process and a second process; the parameter determination unit 113 determines a first parameter used in the first process and a second parameter used in the second process. When the output sound data Do is to be output from the in-vehicle audio device 20A, for example, the device determination unit 118 decides that the smartphone 40 executes the first process and the in-vehicle audio device 20A executes the second process. In this case, the smartphone 40 is an example of a first acoustic processing device, and the in-vehicle audio device 20A is an example of a second acoustic processing device. The third transmission control unit 119 transmits the first parameter to the smartphone 40 and the second parameter to the in-vehicle audio device 20A. The input sound data Di may be transmitted to the smartphone 40 together with the parameter. The smartphone 40 applies the first process to the input sound data Di using the first parameter to generate partially processed data, and transmits the partially processed data to the in-vehicle audio device 20A. The in-vehicle audio device 20A applies the second process to the partially processed data using the second parameter to generate the output sound data Do, which is output from the speaker 230 of the in-vehicle audio device 20A. Alternatively, both the first parameter and the second parameter may be transmitted to the smartphone 40; in that case, the second parameter is forwarded to the in-vehicle audio device 20A together with the partially processed data.
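The two-stage split can be sketched as a pipeline in which the server only decides the parameters and each device runs its assigned stage. The concrete first and second processes are not specified by the embodiment, so a gain stage and an offset stage stand in here purely for illustration.

```python
def first_process(samples, gain):
    """First stage (e.g. run on the first acoustic processing device):
    apply a gain determined by the first parameter."""
    return [s * gain for s in samples]

def second_process(samples, offset):
    """Second stage (e.g. run on the second acoustic processing device):
    apply an offset determined by the second parameter."""
    return [s + offset for s in samples]

def run_split_pipeline(input_sound, first_param, second_param):
    # Stage 1 produces the partially processed data on one device...
    partially_processed = first_process(input_sound, first_param)
    # ...which is transmitted to the other device for stage 2.
    return second_process(partially_processed, second_param)
```

The same structure covers both variants in the text: server + smartphone, or smartphone + head unit, differ only in where each stage runs.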
 FIG. 13 is a flowchart showing the operation of the control device 103 of the acoustic server 10 in the fourth embodiment. In the following flowchart, the various data may be transmitted and received in units of files or in units of packets. The control device 103 functions as the second acquisition unit 112 and acquires at least one of the first information regarding the attributes of the input sound data Di and the second information regarding the in-vehicle audio device 20A (step S50).
 The control device 103 functions as the parameter determination unit 113 and determines, based on at least one of the first information and the second information, the parameters used for the acoustic processing of the input sound data Di (step S51). The control device 103 also functions as the device determination unit 118 and determines the acoustic processing device based on at least one of the first information and the second information (step S52).
 If the acoustic server 10 has not been selected as the acoustic processing device (step S53: NO), that is, if another device has been selected, the control device 103 functions as the third transmission control unit 119 and transmits the parameters determined in step S51 to the other device (the smartphone 40 or the in-vehicle audio device 20A) (step S54). The control device 103 then returns to step S50.
 If, on the other hand, the acoustic server 10 has been selected as the acoustic processing device (step S53: YES), the control device 103 functions as the first acquisition unit 111 and acquires the input sound data Di (step S55). When the acoustic server 10 performs all of the acoustic processing, that is, when the processing is not shared with another device (step S56: NO), the control device 103 functions as the output sound generation unit 114 and generates the output sound data Do by applying acoustic processing to the input sound data Di using the parameters determined in step S51 (step S57). The control device 103 then functions as the first transmission control unit 115 and transmits the output sound data Do to the in-vehicle audio device 20A (step S58), after which it returns to step S50.
 When the acoustic server 10 performs only part of the acoustic processing, that is, when the processing is shared with another device (step S56: YES), the control device 103 functions as the output sound generation unit 114 and applies the portion of the acoustic processing assigned to the acoustic server 10 (for example, the first process) to the input sound data Di to generate partially processed data (step S59). At this point, only the corresponding subset of the parameters determined in step S51 (for example, the first parameter) is used. The control device 103 then functions as the first transmission control unit 115 and transmits the partially processed data, together with the parameters used in the processing assigned to the other device (for example, the second parameter), to the other device (step S60). The control device 103 then returns to step S50.
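The control flow of FIG. 13 (steps S50 to S60) can be summarized as follows. Only the branching structure mirrors the flowchart; the parameter decision, device decision, and signal processing are replaced by placeholders, so every concrete value is an assumption.

```python
def acoustic_server_step(server_selected, shared, input_sound):
    """One pass of the FIG. 13 loop, as a control-flow sketch.

    server_selected stands in for the outcome of steps S52/S53,
    shared for the outcome of step S56, and input_sound for the data
    acquired in step S55. Returns the action the server takes.
    """
    params = {"first": 1.0, "second": 2.0}          # S51: decide parameters
    if not server_selected:                          # S53: NO
        return {"action": "send_params", "params": params}           # S54
    data = list(input_sound)                         # S55: acquire input data
    if not shared:                                   # S56: NO
        processed = [s * params["first"] for s in data]              # S57
        return {"action": "send_output", "data": processed}          # S58
    partial = [s * params["first"] for s in data]    # S59: server's share only
    return {"action": "send_partial",                # S60: hand off the rest
            "data": partial, "params": {"second": params["second"]}}
```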
 As described above, in the fourth embodiment the acoustic server 10 determines, based on at least one of the first information and the second information, the acoustic processing device that applies the acoustic processing to the input sound data Di. The processing of the input sound data can therefore be executed by an appropriate device, improving the efficiency of the system as a whole.
 When the acoustic server 10 performs the acoustic processing, the processing load on the in-vehicle audio device 20A or the smartphone 40 is lower than when either of those devices performs it. In particular, by having the acoustic server 10 perform high-load acoustic processing that the in-vehicle audio device 20A or the smartphone 40 could only handle with a dedicated acoustic processing controller, the configuration of the in-vehicle audio device 20A or the smartphone 40 is simplified, which in turn reduces its cost.
 When the acoustic processing is shared between the acoustic server 10 and another device (the in-vehicle audio device 20A or the smartphone 40), the processing load of the acoustic processing can be distributed, avoiding concentration of the load on any single device.
 When a device other than the acoustic server 10 performs the acoustic processing, the processing load on the acoustic server 10 is reduced.
E: Modifications
 Specific modifications that can be added to each of the embodiments exemplified above are illustrated below. Any plurality of modes selected from the following examples may be combined as appropriate to the extent that they are mutually consistent.
[1] In each of the foregoing embodiments, the sound output device is one of the vehicle-mounted audio devices 20A to 20N. The sound output device is not limited to this and may be any electronic device capable of using sound data. In particular, the sound output device may be an electronic device carried and used by the user, for example a smartphone, a portable audio player, a personal computer, a tablet terminal, or a smartwatch. Such devices either have a built-in speaker or are connected to an external speaker or earphones. Taking as an example a smartphone with external earphones, the second acquisition unit 112 of the acoustic server 10 acquires, as the second information, the output characteristics of the earphones connected to the smartphone. The parameter determination unit 113 determines the sound-processing parameters in accordance with the output characteristics of the earphones, which improves the quality of the sound output from the earphones. The second acquisition unit 112 may also acquire position information of the earphones (or of the smartphone) as the second information. The parameter determination unit 113 then determines the sound-processing parameters according to the position of the earphones: for example, it increases the bass gain when the earphones are outdoors and lowers the volume when they are indoors. Sound processing appropriate to the listening location can thus be performed, improving both the quality and the intelligibility of the sound output from the earphones.
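The location-dependent parameter choice described above can be sketched as follows. This is an illustrative sketch only: the function, the dictionary keys, and the specific gain and volume values are assumptions, not the embodiment's actual implementation.

```python
# Hypothetical sketch of a parameter determination step (cf. unit 113):
# derive sound-processing parameters from earphone output characteristics
# and the current listening location. All names and values are illustrative.

def determine_parameters(output_characteristics: dict, location: str) -> dict:
    """Return EQ gain (dB) and volume scaling for the given device/location."""
    params = {"bass_gain_db": 0.0, "volume": 1.0}
    # Compensate a weak low-frequency response reported by the earphones.
    if output_characteristics.get("bass_response") == "weak":
        params["bass_gain_db"] += 3.0
    # Outdoors: raise the bass gain; indoors: lower the volume.
    if location == "outdoor":
        params["bass_gain_db"] += 4.0
    elif location == "indoor":
        params["volume"] = 0.7
    return params
```

The server would apply these parameters to the input sound data before transmitting the processed result to the smartphone.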
 By applying the above embodiments to a smartphone, processing resources conventionally devoted to sound processing can be used for other tasks, preventing a decline in smartphone performance when sound data is used. Because the smartphone's control device does not perform the sound processing, the smartphone's power consumption is also reduced. Furthermore, there is no need for a high-specification control device dedicated to sound-data processing, which lowers cost.
[2] As described above, the functions of the acoustic server 10 (first acquisition unit 111, second acquisition unit 112, parameter determination unit 113, output sound generation unit 114, first transmission control unit 115, change reception unit 116, vehicle sound generation unit 117, device determination unit 118, and third transmission control unit 119) are realized by the cooperation of the one or more processors constituting the control device 103 and the program PG1 stored in the storage device 102. Likewise, the functions of the in-vehicle audio devices 20A to 20N (vehicle information transmission unit 251, setting reception unit 252, second transmission control unit 253, reception control unit 254, and output control unit 255) are realized by the cooperation of the one or more processors constituting the control device 216 and the program PG2 stored in the storage device 215.
 The above programs may be provided in a form stored on a computer-readable recording medium and installed on a computer. The recording medium is, for example, a non-transitory recording medium; an optical recording medium (optical disc) such as a CD-ROM is a typical example, but any known type of recording medium, such as a semiconductor or magnetic recording medium, is also included. A non-transitory recording medium here means any recording medium other than a transitory, propagating signal, and volatile recording media are not excluded. In a configuration in which a distribution device distributes the programs via the network N, the recording medium that stores the programs in the distribution device corresponds to the above non-transitory recording medium.
 In this specification, "at least one of A and B" (or "at least one of A or B") means "(A), (B), or (A and B)". In other words, "at least one of A and B" can be rephrased as "one or more of A and B" or "at least one selected from the group of A and B".
 "At least one of A, B, and C" (or "at least one of A, B, or C") means "(A), (B), (C), (A and B), (A and C), (B and C), or (A, B, and C)". In other words, it can be rephrased as "one or more of A, B, and C" or "at least one selected from the group of A, B, and C".
C: Supplementary Notes
 From the embodiments exemplified above, the following configurations, for example, can be derived.
 An information processing device according to one aspect (Aspect 1) of the present disclosure generates a plurality of output sound data used in each of a plurality of sound output devices, and includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information on an attribute of the input sound data and second information on one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter used in sound processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data used in the one sound output device by performing the sound processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the one sound output device via a network. With this configuration, the information processing device generates the output sound data by performing the sound processing on the input sound data and transmits it to the one sound output device, so a control device for performing the sound processing need not be placed in the one sound output device. The configuration of the one sound output device is thereby simplified and its cost reduced. Furthermore, the information processing device determines the parameter used in the sound processing based on at least one of the first information on the attribute of the input sound data and the second information on the one sound output device. The sound-processing parameter is therefore set appropriately, improving the quality of the sound based on the output sound data.
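The Aspect 1 pipeline (acquire, determine parameter, process, transmit) can be sketched as follows. The class, the gain-based "acoustic effect", and the list standing in for network transmission are illustrative assumptions, not the disclosed implementation.

```python
# Minimal sketch of the Aspect 1 server-side flow. A simple gain stands in
# for the acoustic effect; appending to `sent` stands in for transmission.
from dataclasses import dataclass, field

@dataclass
class SoundServer:
    sent: list = field(default_factory=list)  # stand-in for the network

    def determine_parameter(self, first_info: dict, second_info: dict) -> float:
        # e.g. scale output by the device's reported headroom (assumed key)
        return second_info.get("headroom", 1.0)

    def process(self, samples: list, gain: float) -> list:
        # the "acoustic effect": here just a gain applied to each sample
        return [s * gain for s in samples]

    def handle(self, samples: list, first_info: dict, second_info: dict) -> list:
        gain = self.determine_parameter(first_info, second_info)
        out = self.process(samples, gain)
        self.sent.append((second_info["device_id"], out))  # "transmit"
        return out
```

Because the processing runs on the server, the sound output device only needs to receive and play the result.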
 In a specific example of Aspect 1 (Aspect 2), the information acquisition unit acquires, as the second information, information indicating an acoustic characteristic of the one sound output device. With this configuration, the acoustic characteristic of the one sound output device is reflected in the sound-processing parameter, so the information processing device can perform sound processing suited to the use of the output sound data by the one sound output device.
 In a specific example of Aspect 1 (Aspect 3), the information acquisition unit acquires, as the second information, information on sound generated around the one sound output device. With this configuration, the sound around the one sound output device is reflected in the sound-processing parameter, so the information processing device can perform sound processing suited to using the output sound data in an environment where sound is being generated nearby.
 In a specific example of Aspect 3 (Aspect 4), the information acquisition unit continuously acquires the information on the sound from the one sound output device while the output sound data is being transmitted, and the parameter determination unit re-determines the parameter when the information on the sound changes. With this configuration, changes in the surrounding sound of the one sound output device are reflected in the sound-processing parameter, so the information processing device can perform sound processing suited to using the output sound data in an environment where the surrounding sound changes from moment to moment.
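The change-triggered re-determination of Aspect 4 can be sketched as follows. The noise-to-gain mapping and its constants are assumptions made for illustration only.

```python
# Sketch of Aspect 4: ambient-sound reports from the output device are
# monitored during streaming, and the parameter is re-determined only
# when the reported information changes.

def ambient_gain(noise_db: float) -> float:
    """More ambient noise -> more make-up gain, capped at 2.0 (assumed rule)."""
    return min(1.0 + max(noise_db - 40.0, 0.0) / 40.0, 2.0)

class ParameterTracker:
    def __init__(self):
        self.last_report = None
        self.gain = 1.0
        self.redeterminations = 0

    def on_ambient_report(self, noise_db: float) -> float:
        if noise_db != self.last_report:        # information changed?
            self.gain = ambient_gain(noise_db)  # re-determine the parameter
            self.redeterminations += 1
            self.last_report = noise_db
        return self.gain
```

Skipping re-determination for unchanged reports keeps the per-report cost low even when the device reports continuously.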
 In a specific example of Aspect 1 (Aspect 5), the information acquisition unit acquires, as the first information, information on the format of the input sound data and, as the second information, information on the formats of sound data that the one sound output device can output, and the parameter determination unit determines, as the parameter, whether format conversion of the input sound data is necessary and, when the conversion is necessary, the destination format. With this configuration, the user can use the input sound data on the one sound output device without being aware of the format of the input sound data.
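The format-conversion decision of Aspect 5 can be sketched as follows. The format names and the preference order (first supported format wins) are illustrative assumptions.

```python
# Sketch of Aspect 5: decide from the input format (first information) and
# the device's supported formats (second information) whether conversion is
# needed and, if so, the destination format.

def decide_format_conversion(input_format: str, supported: list):
    """Return (conversion_needed, destination_format_or_None)."""
    if input_format in supported:
        return (False, None)
    if not supported:
        raise ValueError("device reports no usable format")
    # Assumed policy: convert to the device's most preferred supported format.
    return (True, supported[0])
```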
 In a specific example of Aspect 1 (Aspect 6), the information processing device further includes a reception unit that accepts a change to the parameter from a user of the one sound output device while the output sound data is being transmitted, and when the reception unit accepts the change, the parameter determination unit changes the parameter used in the sound processing to the parameter changed by the user. With this configuration, the information processing device can perform sound processing that reflects the user's preference or a situation not reflected in the first or second information.
 In a specific example of Aspect 1 (Aspect 7), the one sound output device is an in-vehicle audio device that outputs the sound into the cabin of a vehicle. With this configuration, the information processing device can improve the quality of sound output into a vehicle cabin, which, unlike the inside of a building, is not a well-conditioned listening environment.
 In a specific example of Aspect 7 (Aspect 8), the information acquisition unit acquires, as the second information, at least one of information indicating an operation state of the vehicle and information indicating a running state of the vehicle. With this configuration, the information processing device can reflect at least one of the vehicle's operation state and running state in the sound processing of the input sound data.
 In a specific example of Aspect 8 (Aspect 9), the information processing device further includes a vehicle sound generation unit that generates, based on at least one of the information indicating the operation state of the vehicle and the information indicating the running state of the vehicle, vehicle sound data representing a sound to be output from the one sound output device, and the data transmission control unit transmits the vehicle sound data to the one sound output device via the network. With this configuration, the processing load on a control device provided in the vehicle is reduced compared with generating the vehicle sound data on that control device.
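Server-side vehicle sound generation from the running state, as in Aspect 9, might look like the following toy sketch. The linear speed-to-pitch mapping and its constants are assumptions, not the disclosed synthesis method.

```python
# Sketch of Aspect 9: derive a synthetic drive-sound pitch from the
# vehicle's running state (speed) and operation state (accelerator).
# The mapping below is purely illustrative.

def vehicle_sound_pitch(speed_kmh: float, accelerating: bool) -> float:
    """Base pitch (Hz) rises linearly with speed; acceleration adds an offset."""
    pitch = 80.0 + 2.0 * speed_kmh
    if accelerating:
        pitch += 30.0
    return pitch
```

The server would render audio at this pitch into vehicle sound data and transmit it to the in-vehicle audio device, sparing the vehicle ECU that work.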
 An information processing device according to another aspect (Aspect 10) of the present disclosure performs processing on a plurality of output sound data used in each of a plurality of sound output devices, and includes: an information acquisition unit that acquires at least one of first information on an attribute of input sound data and second information on one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter used in sound processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the sound processing on the input sound data. With this configuration, the information processing device determines the sound processing device based on at least one of the first information and the second information, so the processing of the input sound data can be executed by an appropriate device, improving the efficiency of the system as a whole.
 In a specific example of Aspect 10 (Aspect 11), when the device determination unit determines that the information processing device itself is the sound processing device, the information processing device further includes: a data acquisition unit that acquires the input sound data; an output sound generation unit that generates output sound data used in the one sound output device by performing the sound processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the one sound output device via a network. With this configuration, the information processing device itself processes the input sound data, reducing the processing load on other devices compared with performing the sound processing there. In particular, when the information processing device performs high-load sound processing that another device could not handle without a control device dedicated to sound processing, the configuration of that other device is simplified and, as a result, its cost is reduced.
 In a specific example of Aspect 11 (Aspect 12), the sound processing includes a first process and a second process, and the parameter includes a first parameter used in the first process and a second parameter used in the second process. The parameter determination unit determines the first parameter and the second parameter; the device determination unit determines that the information processing device executes the first process and that a device other than the information processing device executes the second process; the output sound generation unit generates partially processed data by performing the first process on the input sound data using the first parameter; and the data transmission control unit transmits the partially processed data and the second parameter to the other device. With this configuration, the plural sound processes applied to the input sound data are distributed between the information processing device and the other device, so the sound-processing load is distributed.
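The split processing of Aspect 12 can be sketched as follows. The two gain stages standing in for the first and second processes, and the function call standing in for transmission, are illustrative assumptions.

```python
# Sketch of Aspect 12: the first process runs on the server; the partially
# processed data and the second parameter are then handed to another device,
# which completes the processing with the second process.

def first_process(samples: list, p1: float) -> list:
    return [s * p1 for s in samples]      # runs on the server (assumed effect)

def second_process(samples: list, p2: float) -> list:
    return [s + p2 for s in samples]      # runs on the other device (assumed)

def distribute(samples: list, p1: float, p2: float) -> list:
    partially_processed = first_process(samples, p1)
    # "Transmit" the partially processed data together with the second
    # parameter; the receiving device finishes the sound processing.
    return second_process(partially_processed, p2)
```

Because only the second process and its parameter reach the other device, neither endpoint needs to run the full processing chain.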
 In a specific example of Aspect 10 (Aspect 13), the information processing device further includes a parameter transmission control unit that, when the device determination unit determines that a device other than the information processing device is the sound processing device, transmits the parameter to that other device. With this configuration, the device determined to be the sound processing device is provided with the parameter used in the sound processing, and the sound processing is executed appropriately.
 In a specific example of Aspect 13 (Aspect 14), the sound processing includes a first process and a second process, and the parameter includes a first parameter used in the first process and a second parameter used in the second process. The parameter determination unit determines the first parameter and the second parameter; the device determination unit determines that a first sound processing device executes the first process and that a second sound processing device executes the second process; and the parameter transmission control unit transmits the first parameter to the first sound processing device and the second parameter to the second sound processing device. With this configuration, the sound processing is shared among plural devices other than the information processing device, reducing the processing load on each device.
 In a specific example of Aspect 1 or Aspect 10 (Aspect 15), the one sound output device is an electronic device carried and used by a user. With this configuration, the quality of sound output from the electronic device carried and used by the user is improved.
 An information processing system according to one aspect (Aspect 16) of the present disclosure includes a plurality of sound output devices and an information processing device that generates a plurality of output sound data used in each of the plurality of sound output devices, wherein the information processing device includes: a data acquisition unit that acquires input sound data; an information acquisition unit that acquires at least one of first information on an attribute of the input sound data and second information on one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter used in sound processing for imparting an acoustic effect to the input sound data; an output sound generation unit that generates output sound data used in the one sound output device by performing the sound processing on the input sound data using the parameter; and a data transmission control unit that transmits the output sound data to the one sound output device via a network.
 An information processing system according to another aspect (Aspect 17) of the present disclosure includes a plurality of sound output devices and an information processing device that performs processing on a plurality of output sound data used in each of the plurality of sound output devices, wherein the information processing device includes: an information acquisition unit that acquires at least one of first information on an attribute of input sound data and second information on one sound output device among the plurality of sound output devices; a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter used in sound processing for imparting an acoustic effect to the input sound data; and a device determination unit that determines, based on at least one of the first information and the second information, a sound processing device that performs the sound processing on the input sound data.
 An information processing method according to one aspect (Aspect 18) of the present disclosure is a computer-implemented method of generating a plurality of output sound data used in each of a plurality of sound output devices, the method including: acquiring input sound data; acquiring at least one of first information on an attribute of the input sound data and second information on one sound output device among the plurality of sound output devices; determining, based on at least one of the first information and the second information, a parameter used in sound processing for imparting an acoustic effect to the input sound data; generating output sound data used in the one sound output device by performing the sound processing on the input sound data using the parameter; and transmitting the output sound data to the one sound output device via a network.
 An information processing method according to another aspect (Aspect 19) of the present disclosure is a computer-implemented method of performing processing on a plurality of output sound data used in each of a plurality of sound output devices, the method including: acquiring at least one of first information on an attribute of input sound data and second information on one sound output device among the plurality of sound output devices; determining, based on at least one of the first information and the second information, a parameter used in sound processing for imparting an acoustic effect to the input sound data; and determining, based on at least one of the first information and the second information, a sound processing device that performs the sound processing on the input sound data.
 In a specific example of Aspect 19 (Aspect 20), the sound processing includes a first process and a second process, and the parameter includes a first parameter used in the first process and a second parameter used in the second process. The method further includes: determining the first parameter and the second parameter; determining that the computer executes the first process and that a device other than the computer executes the second process; acquiring the input sound data; generating partially processed data by performing the first process on the input sound data using the first parameter; and transmitting the partially processed data and the second parameter to the other device.
 Reference signs: 1, 2 ... information processing system; 10 ... acoustic server; 20 (20A-20N) ... in-vehicle audio device; 30 ... distribution server; 40 ... smartphone; 50 ... vehicle ECU; 52 ... navigation device; 54 ... camera; 101 ... communication device; 102 ... storage device; 103 ... control device; 111 ... first acquisition unit; 112 ... second acquisition unit; 113 ... parameter determination unit; 114 ... output sound generation unit; 115 ... first transmission control unit; 116 ... change reception unit; 117 ... vehicle sound generation unit; 118 ... device determination unit; 119 ... third transmission control unit; 200 ... head unit; 211 ... communication device; 212 ... operation device; 213 ... sound data acquisition device; 214 ... microphone; 215 ... storage device; 216 ... control device; 220 ... amplifier; 230 (230A-230F) ... speaker; 240 ... acoustic control device; 251 ... vehicle information transmission unit; 252 ... setting reception unit; 253 ... second transmission control unit; 254 ... reception control unit; 255 ... output control unit; C ... vehicle; Di ... input sound data; N ... network.

Claims (20)

  1.  An information processing device that generates a plurality of output sound data to be used in each of a plurality of sound output devices, the information processing device comprising:
     a data acquisition unit that acquires input sound data;
     an information acquisition unit that acquires at least one of first information on an attribute of the input sound data and second information on one sound output device among the plurality of sound output devices;
     a parameter determination unit that determines, based on at least one of the first information and the second information, a parameter to be used in sound processing for imparting an acoustic effect to the input sound data;
     an output sound generation unit that generates output sound data to be used in the one sound output device by performing the sound processing on the input sound data using the parameter; and
     a data transmission control unit that transmits the output sound data to the one sound output device via a network.
  2.  The information processing device according to claim 1, wherein
     the information acquisition unit acquires, as the second information, information indicating an acoustic characteristic of the one sound output device.
  3.  The information processing device according to claim 1, wherein
     the information acquisition unit acquires, as the second information, information on sound generated around the one sound output device.
  4.  The information processing device according to claim 3, wherein
     the information acquisition unit continuously acquires the information on the sound from the one sound output device while the output sound data is being transmitted, and
     the parameter determination unit re-determines the parameter when the information on the sound changes.
  5.  The information processing device according to claim 1, wherein
     the information acquisition unit acquires, as the first information, information on a format of the input sound data and, as the second information, information on formats of sound data that the one sound output device can output, and
     the parameter determination unit determines, as the parameter, whether format conversion of the input sound data is necessary and, when the format conversion is necessary, a destination format of the conversion.
  6.  The information processing device according to claim 1, further comprising a reception unit that accepts a change to the parameter from a user of the one sound output device while the output sound data is being transmitted, wherein
     the parameter determination unit, when the reception unit accepts the change to the parameter, changes the parameter used in the sound processing to the parameter changed by the user.
  7.  The information processing device according to claim 1, wherein the one sound output device is an in-vehicle audio device that outputs the sound into a cabin of a vehicle.
  8.  The information processing device according to claim 7, wherein the information acquisition unit acquires, as the second information, at least one of information indicating an operating state of the vehicle and information indicating a traveling state of the vehicle.
  9.  The information processing device according to claim 8, further comprising a vehicle sound generation unit that generates, based on at least one of the information indicating the operating state of the vehicle and the information indicating the traveling state of the vehicle, vehicle sound data representing a sound to be output from the one sound output device,
     wherein the data transmission control unit transmits the vehicle sound data to the one sound output device via the network.
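As an illustrative sketch only, the vehicle sound generation of claim 9 might map a traveling-state value such as engine speed to a synthesized tone. The RPM-to-frequency mapping, function name, and signal model below are assumptions for illustration, not the applicant's method.

```python
# Hypothetical sketch: synthesize vehicle sound data whose pitch tracks
# engine speed (the traveling state), as a stand-in for a vehicle sound
# generation unit.
import math

def generate_vehicle_sound(rpm: float, sample_rate: int = 16000,
                           duration_s: float = 0.1) -> list[float]:
    """Generate mono samples of a simulated engine tone.

    Simple model: the firing frequency of a 4-cylinder, 4-stroke engine
    is two firings per revolution, i.e. (rpm / 60) * 2 Hz.
    """
    freq = rpm / 60.0 * 2.0
    n = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq * t / sample_rate) for t in range(n)]

samples = generate_vehicle_sound(rpm=3000)  # 100 Hz tone
print(len(samples))  # 1600
```

The resulting sample list plays the role of the vehicle sound data that the data transmission control unit would send to the in-vehicle audio device over the network.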
  10.  An information processing device that performs processing regarding a plurality of output sound data used by a plurality of sound output devices, the information processing device comprising:
     an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices;
     a parameter determining unit that determines, based on at least one of the first information and the second information, parameters used in acoustic processing that imparts an acoustic effect to the input sound data; and
     a device determining unit that determines, based on at least one of the first information and the second information, an acoustic processing device that performs the acoustic processing on the input sound data.
  11.  The information processing device according to claim 10, further comprising:
     a data acquisition unit that acquires the input sound data when the device determining unit determines that the information processing device is the acoustic processing device;
     an output sound generation unit that generates output sound data used by the one sound output device by performing the acoustic processing on the input sound data using the parameters; and
     a data transmission control unit that transmits the output sound data to the one sound output device via a network.
  12.  The information processing device according to claim 11, wherein
     the acoustic processing includes a first process and a second process,
     the parameters include a first parameter used in the first process and a second parameter used in the second process,
     the parameter determining unit determines the first parameter and the second parameter,
     the device determining unit determines that the information processing device executes the first process and that a device other than the information processing device executes the second process,
     the output sound generation unit generates partially processed data by performing the first process on the input sound data using the first parameter, and
     the data transmission control unit transmits the partially processed data and the second parameter to the other device.
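To make the split-processing arrangement of claim 12 concrete, here is a minimal sketch: the first process runs locally and the partially processed data plus the second parameter are handed to another device. The specific effects (gain as the first process, a one-pole low-pass filter as the second) and all names are illustrative assumptions, not the claimed processes themselves.

```python
# Hypothetical sketch of split acoustic processing (claim 12).

def first_process(samples, gain):
    """First process, run on the information processing device: apply gain."""
    return [s * gain for s in samples]

def second_process(samples, alpha):
    """Second process, run on the other device: one-pole low-pass filter."""
    out, prev = [], 0.0
    for s in samples:
        prev = alpha * s + (1 - alpha) * prev
        out.append(prev)
    return out

input_sound = [0.0, 1.0, 0.0, -1.0]
first_param, second_param = 0.5, 0.8  # as decided by the parameter determining unit

partially_processed = first_process(input_sound, first_param)
# ...partially_processed and second_param would be transmitted over the network...
output_sound = second_process(partially_processed, second_param)
print(partially_processed)  # [0.0, 0.5, 0.0, -0.5]
```

The design point mirrored here is that the other device never needs the first parameter: it receives only the intermediate data and the parameter for its own stage.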
  13.  The information processing device according to claim 10, further comprising a parameter transmission control unit that transmits the parameters to a device other than the information processing device when the device determining unit determines that the other device is the acoustic processing device.
  14.  The information processing device according to claim 13, wherein
     the acoustic processing includes a first process and a second process,
     the parameters include a first parameter used in the first process and a second parameter used in the second process,
     the parameter determining unit determines the first parameter and the second parameter,
     the device determining unit determines that a first acoustic processing device executes the first process and that a second acoustic processing device executes the second process, and
     the parameter transmission control unit transmits the first parameter to the first acoustic processing device and transmits the second parameter to the second acoustic processing device.
  15.  The information processing device according to claim 1 or 10, wherein the one sound output device is an electronic device carried and used by a user.
  16.  An information processing system comprising a plurality of sound output devices and an information processing device that generates a plurality of output sound data used by the plurality of sound output devices, wherein the information processing device includes:
     a data acquisition unit that acquires input sound data;
     an information acquisition unit that acquires at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices;
     a parameter determining unit that determines, based on at least one of the first information and the second information, parameters used in acoustic processing that imparts an acoustic effect to the input sound data;
     an output sound generation unit that generates output sound data used by the one sound output device by performing the acoustic processing on the input sound data using the parameters; and
     a data transmission control unit that transmits the output sound data to the one sound output device via a network.
  17.  An information processing system comprising a plurality of sound output devices and an information processing device that performs processing regarding a plurality of output sound data used by the plurality of sound output devices, wherein the information processing device includes:
     an information acquisition unit that acquires at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices;
     a parameter determining unit that determines, based on at least one of the first information and the second information, parameters used in acoustic processing that imparts an acoustic effect to the input sound data; and
     a device determining unit that determines, based on at least one of the first information and the second information, an acoustic processing device that performs the acoustic processing on the input sound data.
  18.  An information processing method, implemented by a computer, for generating a plurality of output sound data used by a plurality of sound output devices, the method comprising:
     acquiring input sound data;
     acquiring at least one of first information regarding attributes of the input sound data and second information regarding one sound output device among the plurality of sound output devices;
     determining, based on at least one of the first information and the second information, parameters used in acoustic processing that imparts an acoustic effect to the input sound data;
     generating output sound data used by the one sound output device by performing the acoustic processing on the input sound data using the parameters; and
     transmitting the output sound data to the one sound output device via a network.
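The steps of the method of claim 18 can be sketched end to end as below. Every function name, the concrete acoustic effect (a simple gain), and the dict standing in for the network are assumptions for illustration only; the claim does not specify any of them.

```python
# Hypothetical end-to-end sketch of the claimed method: acquire data and
# information, determine parameters, process, and "transmit".

def acquire_input_sound():
    return [0.1, -0.2, 0.3]

def acquire_information():
    # First information: attributes of the input sound; second: device info.
    return {"genre": "speech"}, {"device_id": "speaker-1", "max_gain": 2.0}

def determine_parameters(first_info, second_info):
    # Example rule: speech gets a boost, capped by what the device allows.
    gain = 1.5 if first_info.get("genre") == "speech" else 1.0
    return {"gain": min(gain, second_info["max_gain"])}

def apply_acoustic_processing(samples, params):
    return [s * params["gain"] for s in samples]

def transmit(network, device_id, data):
    network[device_id] = data  # stand-in for sending over a real network

network = {}
samples = acquire_input_sound()
first_info, second_info = acquire_information()
params = determine_parameters(first_info, second_info)
output = apply_acoustic_processing(samples, params)
transmit(network, second_info["device_id"], output)
print(network["speaker-1"])
```

Each stub corresponds to one method step, which is also how the unit-based device claims (data acquisition, information acquisition, parameter determining, output sound generation, data transmission control) partition the same flow.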
  19.  An information processing method, implemented by a computer, for performing processing regarding a plurality of output sound data used by a plurality of sound output devices, the method comprising:
     acquiring at least one of first information regarding attributes of input sound data and second information regarding one sound output device among the plurality of sound output devices;
     determining, based on at least one of the first information and the second information, parameters used in acoustic processing that imparts an acoustic effect to the input sound data; and
     determining, based on at least one of the first information and the second information, an acoustic processing device that performs the acoustic processing on the input sound data.
  20.  The information processing method according to claim 19, wherein the acoustic processing includes a first process and a second process, and the parameters include a first parameter used in the first process and a second parameter used in the second process, the method further comprising:
     determining the first parameter and the second parameter;
     determining that the computer executes the first process and that a device other than the computer executes the second process;
     acquiring the input sound data;
     generating partially processed data by performing the first process on the input sound data using the first parameter; and
     transmitting the partially processed data and the second parameter to the other device.
PCT/JP2023/026774 2022-09-21 2023-07-21 Information processing device, information processing system and information processing method WO2024062757A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022149912 2022-09-21
JP2022-149912 2022-09-21

Publications (1)

Publication Number Publication Date
WO2024062757A1 true WO2024062757A1 (en) 2024-03-28

Family

ID=90454419

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/026774 WO2024062757A1 (en) 2022-09-21 2023-07-21 Information processing device, information processing system and information processing method

Country Status (1)

Country Link
WO (1) WO2024062757A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000010576A (en) * 1998-06-24 2000-01-14 Yamaha Motor Co Ltd Engine simulated sound generating device
JP2014069656A (en) * 2012-09-28 2014-04-21 Pioneer Electronic Corp Acoustic unit, output sound management system, terminal device, and output sound control method
JP2016072973A (en) * 2014-09-24 2016-05-09 韓國電子通信研究院Electronics and Telecommunications Research Institute Audio metadata providing apparatus and audio data playback apparatus to support dynamic format conversion, methods performed by the apparatuses, and computer-readable recording medium with the dynamic format conversion recorded thereon
JP2020109968A (en) * 2019-01-04 2020-07-16 ハーマン インターナショナル インダストリーズ インコーポレイテッド Customized audio processing based on user-specific audio information and hardware-specific audio information


Similar Documents

Publication Publication Date Title
US10250960B2 (en) Sound reproduction device including auditory scenario simulation
CN109147815B (en) System and method for selective volume adjustment in a vehicle
US10142758B2 (en) System for and a method of generating sound
KR100921584B1 (en) Onboard music reproduction apparatus and music information distribution system
US9683884B2 (en) Selective audio/sound aspects
US8019454B2 (en) Audio processing system
US20070171788A1 (en) Audio data reproducing method and program therefor
TW200922272A (en) Automobile noise suppression system and method thereof
CN102640522A (en) Audio data processing device, audio device, audio data processing method, program, and recording medium that has recorded said program
KR20080052404A (en) Musical sound generating vehicular apparatus, musical sound generating method and computer readable recording medium having program
WO2024062757A1 (en) Information processing device, information processing system and information processing method
EP4115415A1 (en) Electronic device, method and computer program
CN113421564A (en) Voice interaction method, voice interaction system, server and storage medium
JP2019080188A (en) Audio system and vehicle
JP2007110481A (en) Audio controller and correcting device
KR101500177B1 (en) Audio system of vehicle
JP2008071058A (en) Device, method and program for reproducing sound
JP2006293697A5 (en)
CN115278484A (en) Audio stream control method, device, equipment and medium
JP2009043353A (en) Title giving device, title giving method, title giving program, and recording medium
US7873424B1 (en) System and method for optimizing digital audio playback
WO2022121617A1 (en) Karaoke method, vehicle-mounted terminal, and vehicle
US20220210593A1 (en) Combining prerecorded and live performances in a vehicle
JP7423156B2 (en) Audio processing device and audio processing method
JP2021018323A (en) Information providing device, information providing method, and program