WO2018101429A1 - Information processing device, information collection method, and program - Google Patents

Information processing device, information collection method, and program

Info

Publication number
WO2018101429A1
Authority
WO
WIPO (PCT)
Prior art keywords: information, sound, output, generated, vehicle
Application number: PCT/JP2017/043135
Other languages: French (fr), Japanese (ja)
Inventor
洋人 河内
昭光 藤吉
洋一 奥山
Original Assignee
パイオニア株式会社
Application filed by パイオニア株式会社 (Pioneer Corporation)
Publication of WO2018101429A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00 specially adapted for navigation in a road network
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01H MEASUREMENT OF MECHANICAL VIBRATIONS OR ULTRASONIC, SONIC OR INFRASONIC WAVES
    • G01H3/00 Measuring characteristics of vibrations by using a detector in a fluid
    • G01H3/04 Frequency
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B29/00 Maps; Plans; Charts; Diagrams, e.g. route diagram

Definitions

  • the present invention relates to an information processing apparatus, an information collection method, and a program.
  • An example of a technique for collecting and utilizing various kinds of information during travel of a vehicle is disclosed in, for example, Patent Document 1 and Patent Document 2 below.
  • Patent Document 1 discloses a technique in which sound generated when a vehicle travels over a bridge joint is collected using a microphone mounted on the vehicle, and an abnormality of the bridge joint is detected by analyzing the sound.
  • Patent Document 2 discloses a technique in which, when a vibration measurement value greater than a predetermined threshold is detected by vibration measurement means during travel of the vehicle, the vehicle position information and the vibration measurement value at that time are output, enabling the road administrator to detect a place that needs repair.
  • Patent Document 1: JP 2011-242294 A; Patent Document 2: JP 2016-95184 A
  • Technologies that utilize map information can be improved by creating map information that has a large amount of information and is useful in various aspects.
  • a technique for creating such map information is desired.
  • Examples of the problems to be solved by the present invention include providing a technique for creating map information that has a large amount of information, is useful in various aspects, and contributes to driving support for a moving body.
  • The invention described in claim 1 is an information processing apparatus comprising: an acquisition unit that acquires sound information collected by a sound collecting device mounted on a moving body and generated based on an external sound generated outside the moving body, together with position information of the moving body at the time the sound information was generated; and an output unit that generates output information in which the sound information and the position information are associated with each other, and outputs the output information to a predetermined output destination.
  • The invention described in claim 9 is an information collection method including: a process in which a computer acquires sound information collected by a sound collecting device mounted on a moving body and generated based on an external sound generated outside the moving body, together with position information of the moving body at the time the sound information was generated; and a process of generating output information in which the sound information and the position information are associated, and outputting the output information to a predetermined output destination.
  • The invention described in claim 10 is a program for causing a computer to function as: means for acquiring sound information collected by a sound collecting device mounted on a moving body and generated based on an external sound generated outside the moving body, together with position information of the moving body at the time the sound information was generated; and means for generating output information in which the sound information and the position information are associated with each other, and outputting the output information to a predetermined output destination.
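  • The claimed configuration can be illustrated with a minimal sketch (all names are hypothetical; the claims do not prescribe any particular implementation):

```python
from dataclasses import dataclass

@dataclass
class OutputInformation:
    """Output information of claim 1: sound and position, associated."""
    sound_info: bytes        # signal generated from the external sound
    position_info: tuple     # (latitude, longitude) of the moving body

class InformationProcessingApparatus:
    """Skeleton of the claimed apparatus (acquisition unit + output unit)."""

    def __init__(self, sound_collector, positioner, output_destination):
        self.sound_collector = sound_collector        # microphone on the moving body
        self.positioner = positioner                  # e.g. a GPS module
        self.output_destination = output_destination  # e.g. server or local storage

    def acquire(self):
        """Acquisition unit: sound information and the position at generation time."""
        sound = self.sound_collector.read()
        position = self.positioner.current_position()
        return sound, position

    def run_once(self):
        """Output unit: associate the two and output to the predetermined destination."""
        sound, position = self.acquire()
        self.output_destination.send(OutputInformation(sound, position))
```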
  • FIG. 6 is a diagram for explaining the flow of processing in which the acquisition unit specifies the position information of the vehicle at the time the sound information was generated, based on the correspondence between the acquisition time of the vehicle position information and the acquisition time of the sound information.
  • A block diagram conceptually showing the functional configuration of the server apparatus of the first embodiment. A diagram showing an example of the table that associates sound information with map information. A diagram illustrating the hardware configurations of the information processing apparatus and the server apparatus.
  • A flowchart illustrating the flow of processing executed by the information processing apparatus of the first embodiment. A flowchart illustrating another example of processing executed by the information processing apparatus. A flowchart illustrating the flow of processing executed by the server apparatus of the first embodiment. A block diagram conceptually showing the functional configuration of the information processing apparatus of the second embodiment. A diagram showing an example of the table that associates sound analysis information with map information. A block diagram conceptually showing the functional configuration of the analysis server apparatus of the third embodiment. A diagram showing an example of the table that associates additional information with map information.
  • A diagram illustrating the hardware configuration of the analysis server apparatus. A flowchart illustrating the flow of processing executed by the analysis server apparatus of the third embodiment. A block diagram conceptually showing the functional configuration of the analysis server apparatus of the fourth embodiment. A diagram showing an example of the drawing data generated by the display output unit.
  • each block in the block diagram represents a functional unit configuration, not a hardware unit configuration.
  • FIG. 1 is a block diagram conceptually illustrating a configuration example of the information collection system 1.
  • the dotted line in the figure represents a wired or wireless communication path.
  • the information collection system 1 includes an information processing device 10 connected to a sound collection device 30 mounted on each vehicle, and a server device 20 connected to the information processing device 10 of each vehicle.
  • the configuration of the information collection system 1 is not limited to the example of FIG.
  • the information collection system 1 can be configured to include one or more information processing apparatuses 10 and one or more server apparatuses 20.
  • In the following, the case of a vehicle will be described as a specific example, but the present invention may also be applied to a moving body other than a vehicle.
  • the information processing apparatus 10 is an apparatus mounted on a vehicle.
  • the information processing apparatus 10 can communicate with the server apparatus 20 by connecting to the network 50 via a wireless line such as 3G or LTE (Long Term Evolution), for example.
  • The information processing device 10 is, for example, an externally attached device that can be mounted inside or outside a vehicle.
  • the information processing apparatus 10 may be an apparatus incorporated in a vehicle, such as an ECU (Electronic Control Unit). Further, the information processing apparatus 10 may be a portable terminal (for example, a smartphone or a tablet terminal) in which an application that realizes each function described below is installed.
  • the server device 20 is a device having a function of collecting information acquired by the information processing device 10.
  • the server device 20 can communicate with each of the information processing devices 10 mounted on each vehicle via the network 50.
  • the sound collecting device 30 is a device including a microphone or a microphone array that generates and outputs an electrical signal (sound information) corresponding to the collected sound wave.
  • the sound collecting device 30 is provided, for example, on the outer periphery of the vehicle.
  • the installation position and number of the sound collecting devices 30 are not particularly limited.
  • the sound collection device 30 can transmit sound information to the information processing device 10 by wireless connection or wired connection. Further, the information processing device 10 and the sound collecting device 30 may be integrally configured as one device.
  • FIG. 2 is a block diagram conceptually showing the functional configuration of the information processing apparatus 10 according to the first embodiment.
  • the information processing apparatus 10 according to the present embodiment includes an acquisition unit 12 and an output unit 14.
  • the acquisition unit 12 acquires sound information generated using the sound collection device 30 mounted on the vehicle.
  • the acquisition unit 12 acquires vehicle position information when the sound information is generated.
  • the acquisition unit 12 acquires position information from a GPS (Global Positioning System) module (not shown) or the like mounted on the vehicle in accordance with the timing at which sound information is acquired from the sound collection device 30.
  • Alternatively, the acquisition unit 12 may acquire the position information of surrounding radio base stations in accordance with the timing at which the sound information is acquired from the sound collection device 30, and calculate the position information of the vehicle using the position information of the radio base stations.
  • the position information acquired in this way can be used as “position information of the vehicle when sound information is generated”.
  • The acquisition unit 12 can also specify the position information of the vehicle at the time the sound information was generated, based on the correspondence between the time when the position information of the vehicle was acquired and the time when the sound information generated using the sound collection device 30 was acquired.
  • FIG. 3 is a diagram for explaining the flow of processing in which the acquisition unit 12 specifies the position information of the vehicle at the time the sound information was generated, based on the correspondence between the acquisition time of the vehicle position information and the acquisition time of the sound information.
  • the horizontal axis in FIG. 3 represents the time axis.
  • The graph denoted by the reference symbol S indicates the sound information acquired by the acquisition unit 12 from the sound collection device 30. Further, it is assumed that the acquisition unit 12 acquires the position information of the vehicle at time t_A and at time t_B.
  • Based on the correspondence between the position information of the vehicle acquired at time t_A and the position information of the vehicle acquired at time t_B, the acquisition unit 12 can specify the position information of the vehicle at the time the sound information S_B, acquired between time t_A and time t_B, was generated.
  • For example, the acquisition unit 12 may specify position information having a width (position information including the two points of the position at time t_A and the position at time t_B) as the position information of the vehicle at the time the sound information S_B was generated.
  • Alternatively, the acquisition unit 12 may specify a representative position between the position at time t_A and the position at time t_B (for example, an intermediate waypoint) as the position information of the vehicle at the time the sound information S_B was generated.
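  • The following is a minimal sketch of the two options above, assuming timestamped position fixes bracketing the sound acquisition time (all names are hypothetical):

```python
def position_for_sound(t_sound, t_a, pos_a, t_b, pos_b, mode="representative"):
    """Specify vehicle position for sound acquired between fixes at t_a and t_b.

    pos_a, pos_b: (latitude, longitude) fixes at times t_a <= t_sound <= t_b.
    mode "width": return both end points as position information having a width.
    mode "representative": linearly interpolate a single representative point.
    """
    assert t_a <= t_sound <= t_b
    if mode == "width":
        return (pos_a, pos_b)
    # Fraction of the interval elapsed at the moment the sound was acquired.
    f = 0.0 if t_b == t_a else (t_sound - t_a) / (t_b - t_a)
    lat = pos_a[0] + f * (pos_b[0] - pos_a[0])
    lon = pos_a[1] + f * (pos_b[1] - pos_a[1])
    return (lat, lon)

# Example: sound picked up 3 s after a fix, with fixes 10 s apart.
print(position_for_sound(103.0, 100.0, (35.6810, 139.7670),
                         110.0, (35.6820, 139.7680)))
```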
  • the output unit 14 generates output information in which the sound information acquired by the acquisition unit 12 and the vehicle position information are linked to each other.
  • the output unit 14 outputs output information including sound information and vehicle position information to a predetermined output destination.
  • the predetermined output destination may be, for example, the server device 20 or a storage device (not shown) built in the information processing device 10.
  • FIG. 4 is a block diagram conceptually showing the functional configuration of the server apparatus 20 of the first embodiment. As illustrated in FIG. 4, the server device 20 includes a reception unit 22 and an association unit 24.
  • the receiving unit 22 receives output information output from the output unit 14 of the information processing apparatus 10 as input information.
  • the associating unit 24 stores the sound information included in the input information in a predetermined storage unit in association with the map information using the vehicle position information included in the input information received by the receiving unit 22.
  • the predetermined storage unit is, for example, an internal storage device (not shown) included in the server device 20 or an external storage device connected to the server device 20.
  • the associating unit 24 can associate sound information and map information using a table as shown in FIG.
  • FIG. 5 is a diagram showing an example of a table associating sound information with map information.
  • the table illustrated in FIG. 5 stores sound information for each piece of information indicating the position on the map (hereinafter referred to as “map position information”) stored in the “position on the map” column.
  • the map position information may be, for example, information indicating one position or information indicating an area defined by three or more position information.
  • the map position information may be a simple identifier. In this case, information indicating a specific position or area on the map is separately prepared in a state associated with the identifier.
  • The associating unit 24 compares the position information included in the input information received by the receiving unit 22 with the map position information stored in the "position on the map" column, and can thereby specify the row (record) in which the sound information included in the input information is to be stored. Specifically, the associating unit 24 can specify a row (record) storing map position information that matches the position indicated by the position information included in the input information, or that includes that position, as the "row (record) in which the sound information is to be stored". Then, the associating unit 24 stores the sound information included in the input information received by the receiving unit 22 in the "sound information" column of the identified row (record). As described above, the associating unit 24 can associate the sound information with the map information by storing the sound information in association with the information indicating the position on the map.
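  • One way to realize such a table association is sketched below (a toy in-memory table with hypothetical field names; the actual storage format is not specified in the document):

```python
# Hypothetical map table: each record covers an area on the map and
# accumulates the sound information reported for that area.
map_table = [
    {"map_position": ((35.680, 139.766), (35.682, 139.768)),  # (SW, NE) corners
     "sound_info": []},
]

def contains(area, point):
    """True if the (lat, lon) point lies inside the rectangular area."""
    sw, ne = area
    return sw[0] <= point[0] <= ne[0] and sw[1] <= point[1] <= ne[1]

def associate(input_info):
    """Store sound info in the record whose map position contains the vehicle position."""
    for record in map_table:
        if contains(record["map_position"], input_info["position"]):
            record["sound_info"].append(input_info["sound"])
            return record
    return None  # no matching row; the document leaves this case open

associate({"position": (35.681, 139.767), "sound": b"...pcm..."})
```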
  • As described above, in the present embodiment, the sound information generated by the sound collection device 30 mounted on each vehicle and the position information of the vehicle at the time the sound information was generated are sent to the server device 20 in a state of being linked to each other.
  • The sound information includes various information that can indicate the situation at the time the sound information was generated. For example, if a person's voice is included in the sound information, it can be understood that a person exists around the vehicle. Further, the age and sex of a person can be estimated from features of the person's voice (frequency distribution or the like). If the siren of an emergency vehicle is contained in the sound information, it can be determined that an emergency vehicle passed near the vehicle. If the sound information includes an object collision sound or a sudden braking sound, a contact accident or a sudden braking event around the vehicle can be detected.
  • Further, the moving speed of the vehicle can be estimated using the characteristic that the volume in the high frequency region increases as the vehicle moves at higher speed. By deriving in advance a function relating the high-frequency volume to the moving speed, an estimated value of the moving speed of the vehicle can be calculated from the sound information.
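  • A minimal sketch of this estimation, assuming the relation between high-frequency volume and speed has been fitted in advance as a linear function (the band edges and coefficients are illustrative, not taken from the document):

```python
import numpy as np

def estimate_speed_kmh(samples, rate_hz, band=(2000.0, 8000.0),
                       coeff=4.0, offset=-20.0):
    """Estimate vehicle speed from the high-frequency volume of collected sound.

    The dB level of the high-frequency band is mapped through a linear
    function speed = coeff * level_db + offset; in practice coeff and offset
    would be fitted in advance from recordings at known speeds.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    level_db = 20.0 * np.log10(spectrum[sel].mean() + 1e-12)
    return max(0.0, coeff * level_db + offset)  # clipped at standstill

# Example: 1 s of synthetic noise sampled at 16 kHz.
print(estimate_speed_kmh(np.random.default_rng(0).standard_normal(16000), 16000))
```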
  • In the present embodiment, the "position on the map" is specified based on the position information of the vehicle linked to the sound information, and the sound information is stored in association with that position on the map. Since sound information is accumulated for each position on the map, characteristics of each position (for example, many/few pedestrians and bicycles, frequent/infrequent passage of emergency vehicles, fast/slow vehicle moving speed, etc.) can be grasped. That is, according to the present embodiment, map information having high information density (added value) and useful in various aspects can be created.
  • Each functional component of the information processing device 10 and the server device 20 may be realized by hardware that implements the component (for example, a hard-wired electronic circuit), or by a combination of hardware and software (for example, a combination of an electronic circuit and a program for controlling the electronic circuit).
  • FIG. 6 is a diagram illustrating a hardware configuration of the information processing apparatus 10 and the server apparatus 20.
  • the information processing apparatus 10 includes a bus 102, a processor 104, a memory 106, a storage device 108, an input / output interface 110, and a network interface 112.
  • the bus 102 is a data transmission path through which the processor 104, the memory 106, the storage device 108, the input / output interface 110, and the network interface 112 transmit / receive data to / from each other.
  • the method of connecting the processors 104 and the like is not limited to bus connection.
  • the processor 104 is an arithmetic processing device realized using a microprocessor or the like.
  • the memory 106 is a memory realized using a RAM (Random Access Memory) or the like.
  • the storage device 108 is a storage device realized using a ROM (Read Only Memory), a flash memory, or the like.
  • the input / output interface 110 is an interface for connecting the information processing apparatus 10 to peripheral devices.
  • the input / output interface 110 is connected to a GPS module 1101 for acquiring information indicating the current position of the vehicle.
  • the information processing apparatus 10 can also acquire the position information of the surrounding base station via the network interface 112 and estimate the current position of the vehicle using the position information of the surrounding base station.
  • the GPS module 1101 may not be connected to the input / output interface 110.
  • the input / output interface 110 may be further connected to various input devices that accept input operations from a user, a display device, a touch panel in which they are integrated, and the like.
  • the network interface 112 is an interface for connecting the information processing apparatus 10 to a communication network.
  • the information processing apparatus 10 may have a plurality of network interfaces 112.
  • For example, the information processing apparatus 10 includes a network interface 112 for connecting to a CAN communication network, a network interface 112 for connecting to a WAN (Wide Area Network) communication network, and a network interface 112 supporting a short-range wireless communication standard (for example, Bluetooth (registered trademark)).
  • the information processing apparatus 10 can communicate with the external server apparatus 20 via the WAN communication network and output output information to the server apparatus 20.
  • the information processing apparatus 10 can communicate with the sound collection device 30 by short-range wireless and acquire sound information generated by the sound collection device 30.
  • the information processing apparatus 10 can also acquire information indicating the operation of the vehicle (for example, the moving speed of the vehicle) via the CAN communication network.
  • the storage device 108 stores a program module for realizing each functional component of the information processing apparatus 10.
  • the processor 104 reads out the program module to the memory 106 and executes it, thereby realizing the function of each functional component of the information processing apparatus 10.
  • the server device 20 includes a bus 202, a processor 204, a memory 206, a storage device 208, an input / output interface 210, and a network interface 212.
  • the bus 202 is a data transmission path through which the processor 204, the memory 206, the storage device 208, the input / output interface 210, and the network interface 212 exchange data with each other.
  • the method of connecting the processors 204 and the like is not limited to bus connection.
  • the processor 204 is an arithmetic processing device realized using a microprocessor or the like.
  • the memory 206 is a memory realized using a RAM (Random Access Memory) or the like.
  • the storage device 208 is a storage device realized using a ROM (Read Only Memory), a flash memory, or the like.
  • the input / output interface 210 is an interface for connecting the server device 20 to peripheral devices.
  • For example, the input / output interface 210 is connected to an input device such as a keyboard and a mouse, a display device such as an LCD (Liquid Crystal Display), or a touch panel in which these are integrated.
  • the input device and the display device may be connected via a network interface 212 over a network.
  • the network interface 212 is an interface for connecting the server device 20 to a communication network.
  • the server device 20 includes a network interface 212 for connecting to a WAN (Wide Area Network) communication network.
  • the server device 20 can communicate with the information processing device 10 mounted on the vehicle via the WAN communication network and acquire output information from the information processing device 10 (input information for the server device 20).
  • the storage device 208 stores a program module for realizing each functional component of the server device 20.
  • the processor 204 reads out the program module to the memory 206 and executes it, thereby realizing the function of each functional component of the server device 20.
  • FIG. 7 is a flowchart illustrating the flow of processing executed by the information processing apparatus 10 according to the first embodiment.
  • the acquisition unit 12 communicates with the sound collection device 30 wirelessly or by wire to obtain sound information generated by the sound collection device 30 (S102).
  • The acquisition unit 12 may actively acquire the sound information from the sound collection device 30, for example by notifying the sound collection device 30 of a transmission request for the sound information, or may passively acquire the sound information from the sound collection device 30 by monitoring its transmissions.
  • Next, the acquisition unit 12 acquires the position information of the vehicle at the time the sound information was generated, based on the GPS information of the GPS module 1101 or the position information of surrounding base stations (S104).
  • the acquisition unit 12 can acquire position information when sound information is generated by, for example, the method described with reference to FIG.
  • the output unit 14 associates the sound information acquired in S102 with the vehicle position information acquired in S104, and generates output information (S106). Then, the output unit 14 outputs the output information to the server device 20 (S108).
  • FIG. 8 is a flowchart illustrating another example of processing executed by the information processing apparatus 10. The flowchart in FIG. 8 is executed following step S106 in the flowchart in FIG. 7.
  • First, the output unit 14 temporarily stores the output information in a predetermined storage unit (for example, the storage device 108 of the information processing apparatus 10) (S110). Then, the output unit 14 determines whether a transmission condition for the output information accumulated in the predetermined storage unit is satisfied (S112).
  • The transmission condition is satisfied, for example, when the number of pieces of output information accumulated in the storage device reaches a predetermined number, or when the current time reaches a scheduled transmission time.
  • When the transmission condition is not satisfied (S112: NO), the output unit 14 does not execute the processing described below. When the transmission condition is satisfied later, the output unit 14 executes that processing.
  • When the transmission condition is satisfied (S112: YES), the output unit 14 reads the output information stored in the predetermined storage unit and transmits it to the server device 20 (S114). Then, the output unit 14 deletes the output information transmitted in S114 from the predetermined storage unit (S116). Specifically, when the output unit 14 receives a confirmation signal indicating that the output information has been normally received from the server device 20, the output unit 14 deletes the transmitted output information from the predetermined storage unit. The output unit 14 may delete the output information record itself, or may logically delete the output information by adding a deletion flag to it. Alternatively, the output unit 14 may add a flag to output information that has already been transmitted and delete flagged information in a periodically executed batch process. The output unit 14 may also be configured to retransmit the output information when receiving a signal indicating a reception error from the server device 20.
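  • The accumulate-and-transmit behaviour of FIG. 8 can be sketched as follows (a count-based transmission condition and acknowledgement-based deletion; class and method names are hypothetical):

```python
class BufferedOutputUnit:
    """Accumulates output information and transmits in batches, as in FIG. 8."""

    def __init__(self, server, batch_size=100):
        self.server = server          # object exposing send(items) -> bool (ack)
        self.batch_size = batch_size  # transmission condition: count threshold
        self.buffer = []              # stands in for the storage device 108

    def store(self, output_info):                 # S110: accumulate
        self.buffer.append(output_info)
        if self.condition_met():                  # S112: check condition
            self.flush()

    def condition_met(self):
        return len(self.buffer) >= self.batch_size

    def flush(self):                              # S114 / S116: send, then delete
        pending = list(self.buffer)
        if self.server.send(pending):             # True = confirmation signal
            del self.buffer[:len(pending)]        # delete only acknowledged items
        # On a reception error (False), the data is kept and retransmitted later.
```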
  • the acquisition unit 12 may further acquire information indicating the generation time of sound information.
  • the acquisition unit 12 can check the time managed in the information processing apparatus 10 in accordance with the timing at which the sound information is acquired, and can acquire the time at that time as the “sound information generation time”.
  • Alternatively, the sound collection device 30 may acquire the time managed in the sound collection device 30 at the timing when the sound information is generated or transmitted, and transmit that time in association with the sound information as the "sound information generation time".
  • the output unit 14 may further associate the generation time of the sound information with the output information and output it to the server device 20 or the storage device 108. In this way, a collection of sound information can be classified based on the generation time, and a more detailed analysis is possible.
  • the acquisition unit 12 may further acquire information (operation information) indicating the operation of the vehicle when sound information is generated via the CAN communication network.
  • An example of the vehicle operation information acquired by the acquisition unit 12 is the moving speed of the vehicle.
  • the output unit 14 may further associate the vehicle operation information with the output information and output it to the server device 20 or the storage device 108.
  • In this way, information that can be analyzed in more detail can be generated. For example, when wind noise is included in the sound information, it can be determined based on the moving speed of the vehicle whether the wind noise was generated by the movement of the vehicle, and the strength of the wind at that location can be estimated.
  • the acquisition unit 12 may further acquire information (weather information) indicating the weather when the sound information is generated.
  • the acquisition unit 12 can access the Web server that distributes the weather information via the network interface 112 and acquire the weather information corresponding to the position indicated by the position information in S104.
  • the acquisition unit 12 may estimate the weather at the current position of the vehicle using sensing data obtained from various sensors (such as a raindrop sensor, an illuminance sensor, and an image sensor) mounted on the vehicle.
  • Alternatively, the acquisition unit 12 may estimate the weather at the current position of the vehicle using vehicle control signals that can be acquired via the CAN communication network. For example, when a control signal for operating the wipers is acquired, the acquisition unit 12 can generate information indicating that the weather is rainy.
  • the output unit 14 may further associate the weather information with the output information and output it to the server device 20 or the storage device 108. In this way, the collection of sound information can be classified based on the weather information, and a more detailed analysis is possible.
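  • The wiper-signal heuristic above can be sketched as follows (signal names are hypothetical; a real implementation would read them from the CAN bus):

```python
def estimate_weather(can_signals):
    """Crude weather estimate from vehicle control signals (names hypothetical)."""
    if can_signals.get("wiper_active"):
        return "rain"
    if can_signals.get("fog_lamp_on"):
        return "fog"
    return "unknown"  # fall back to a weather web service or on-board sensors

print(estimate_weather({"wiper_active": True}))  # -> "rain"
```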
  • The acquisition unit 12 may further acquire state information indicating the state of the driver at the time the sound information was generated, that is, how the driver reacted to the external sound or to the sound source of the external sound.
  • the acquisition unit 12 acquires a driver's image from an in-vehicle camera mounted on the vehicle, and generates state information by analyzing the driver's state (facial expression, behavior, etc.) based on the acquired image.
  • Alternatively, state information may be generated by analyzing the driver's state (change in heart rate, etc.) from a biosensor worn by or in contact with the driver.
  • the acquisition unit 12 may acquire the driver's image and the output signal of the biosensor described above over, for example, several seconds to several tens of seconds immediately before, immediately after, or both of the timing when the sound information is generated. Note that past driver images and biosensor output signals are stored in, for example, the storage device 108.
  • the acquisition unit 12 can read the driver image immediately before the timing at which the sound information is generated or the output signal of the biosensor from the storage device 108 or the like.
  • The output unit 14 may further associate the state information generated in this way with the output information and output it to the server device 20 or the storage device 108. For example, assume that a bicycle jumping out from the driver's blind spot at a crossroad or the like brakes suddenly.
  • the sound collecting device 30 collects the sound of the sudden braking of the bicycle, and the acquisition unit 12 obtains sound information relating to the sudden braking of the bicycle generated by the sound collecting device 30.
  • the in-vehicle camera captures the driver's face image, and the biometric sensor acquires the driver's heart rate.
  • the facial image at this time has a surprised expression, and the heart rate suddenly increases.
  • the acquisition unit 12 acquires the image and the sensing result of the biological sensor from the in-vehicle camera and the biological sensor, and generates state information indicating that the driver is surprised. Also, it is assumed that a bicycle running alongside the vehicle applied a brake to wait for a signal in front of an intersection or the like.
  • the sound collecting device 30 collects the sound of the bicycle brake, and the acquisition unit 12 obtains sound information relating to the bicycle brake generated by the sound collecting device 30.
  • the in-vehicle camera captures the driver's face image, and the biometric sensor acquires the driver's heart rate.
  • the facial image at this time has little change in facial expression before and after the bicycle brakes, and there is little change in heart rate.
  • the acquisition unit 12 acquires such an image and the sensing result of the biological sensor from the in-vehicle camera and the biological sensor, and generates state information indicating that there is little change in the driver's state even when the brake sound is heard.
  • In the above examples, both the in-vehicle camera and the biosensor are used, but only one of them may be used, and the in-vehicle camera may capture the driver's behavior instead of the driver's facial expression.
  • Further, sensors such as an electroencephalogram sensor, a vibration sensor, or an in-vehicle microphone may be used instead of the in-vehicle camera and the biosensor to acquire the driver's brain waves, movements, utterances, and the like. The above sensors may also be combined as appropriate.
  • When the driver's state is associated with the sound information and output to the server device 20 or the storage device 108 in this way, more detailed analysis is possible.
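  • A toy sketch of deriving state information from a biosensor, assuming average heart rates sampled immediately before and after the sound (the threshold is illustrative, not from the document):

```python
def driver_state(hr_before, hr_after, threshold_pct=20.0):
    """Label the driver's reaction to an external sound from heart-rate change.

    hr_before / hr_after: average heart rate (bpm) in the seconds immediately
    before and after the sound information was generated.
    """
    change = 100.0 * (hr_after - hr_before) / hr_before
    if change >= threshold_pct:
        return "surprised"      # e.g. sudden bicycle braking in a blind spot
    return "little_change"      # e.g. routine braking beside the vehicle

print(driver_state(70.0, 95.0))  # -> "surprised"
```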
  • FIG. 9 is a flowchart illustrating the flow of processing executed by the server device 20 according to the first embodiment.
  • the receiving unit 22 acquires output information (sound information and vehicle position information) output from the information processing apparatus 10 as input information (S202).
  • the receiving unit 22 stores the acquired input information, for example, in a table (such as a table for storing input information as it is) different from the table shown in FIG. 5 (S204). This other table is prepared in advance on the storage device 208, for example.
  • the associating unit 24 groups the sound information stored in the other table based on the vehicle position information, and totals the number of acquired sound information for each group (S206). For example, the associating unit 24 can group the sound information for each position indicated by the position information of the vehicle associated with the sound information or for each area including the position.
  • the associating unit 24 selects one of the groups (S208), and determines whether or not the number of acquired sound information for each group is equal to or greater than a predetermined threshold (reference) (S210).
  • the predetermined threshold is defined on the program module of the associating unit 24 as, for example, a predetermined value corresponding to the reliability to be secured in later analysis. Specifically, the larger the threshold value serving as a reference, the more sound information is required to associate with the map information. As a result, since the number of information that can be used for analysis increases, the reliability of the analysis result can be improved.
  • When the number of acquired sound information of the selected group is less than the threshold (S210: NO), the associating unit 24 determines whether there is a group that has not yet been selected (S214). If all groups have been selected (S214: NO), the server device 20 ends the process. On the other hand, when there is a group that has not yet been selected (S214: YES), the associating unit 24 selects that group (S208) and determines again whether the number of acquired sound information of the group is equal to or greater than the predetermined threshold (S210).
  • When the number of acquired sound information of the selected group is equal to or greater than the threshold (S210: YES), the associating unit 24 associates the sound information belonging to that group with the map information using the vehicle position information (S212).
  • the associating unit 24 can specify the position on the map to which the sound information should be associated using the vehicle position information associated with each sound information.
  • For example, the associating unit 24 can identify the row (record) with which the sound information of each group should be associated by referring to the "position on the map" (map position information) in a table as shown in FIG. 5. Then, the associating unit 24 can associate each piece of sound information with the map information by storing the sound information of the group in the "sound information" column of the identified row (record).
  • After the association in S212, the associating unit 24 similarly determines whether there is a group that has not yet been selected (S214), and repeats the above processing until all groups have been selected.
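  • The grouping and threshold check (S206-S214) can be sketched as follows (grid-cell grouping is one possible realization; the document does not fix how positions are grouped):

```python
from collections import defaultdict

def area_key(position, cell=0.001):
    """Quantize a (lat, lon) position into a grid cell used as the group key."""
    return (round(position[0] / cell), round(position[1] / cell))

def associate_groups(input_infos, store_in_map_table, threshold=50):
    """S206-S214: group sound info by position; associate groups meeting the threshold.

    input_infos: iterable of {"position": ..., "sound": ...} dicts.
    store_in_map_table: callable that stores one item in the map table (S212).
    """
    groups = defaultdict(list)                 # S206: group and tally
    for info in input_infos:
        groups[area_key(info["position"])].append(info)

    for key, members in groups.items():        # S208: select each group
        if len(members) < threshold:           # S210: reliability check
            continue                           # too few samples to trust
        for info in members:                   # S212: associate with map
            store_in_map_table(info)
```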
  • When the input information includes information such as the generation time, vehicle operation information, weather information, or driver state information described above, the associating unit 24 may store that information together with the sound information. In this way, more detailed information can be obtained from the collection of sound information associated with the map information.
  • FIG. 10 is a block diagram conceptually showing the functional configuration of the information processing apparatus 10 in the second embodiment. As illustrated in FIG. 10, the information processing apparatus 10 according to the present embodiment further includes a sound analysis unit 16.
  • the sound analysis unit 16 analyzes the sound information acquired by the acquisition unit 12 and generates sound analysis information.
  • The sound analysis unit 16 can generate sound analysis information indicating the volume of the sound information based on the sound pressure level of the sound information (electric signal), for example. Further, for example, the sound analysis unit 16 may decompose the sound information into frequency components using frequency analysis software or the like, and generate sound analysis information indicating the volume for each frequency based on the sound pressure level of each frequency component.
  • the sound analysis unit 16 may generate sound analysis information indicating a sound source using a known sound source separation algorithm and sound source identification algorithm.
  • Using these algorithms, the sound analysis unit 16 can extract, for example, a person's voice, vehicle running sounds (e.g., engine sound, road noise), an emergency vehicle's siren, a sudden braking sound, a bicycle chain sound, and the like from the sound information.
  • Further, when the sound source is a person, the sound analysis unit 16 can also generate sound analysis information indicating the person's age and sex, using a dictionary database generated from analysis results (formant distribution, etc.) of voice samples for each age and sex.
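  • A minimal sketch of volume-by-frequency analysis, one form of sound analysis information (band edges are illustrative; sound source identification and age/sex estimation would require separate trained models or dictionary databases):

```python
import numpy as np

def sound_analysis_info(samples, rate_hz):
    """Volume analysis (total and per frequency band) as sound analysis information."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate_hz)
    info = {"total_db": 20.0 * np.log10(spectrum.mean() + 1e-12)}
    for lo, hi in [(0, 250), (250, 1000), (1000, 4000), (4000, 16000)]:  # Hz
        vals = spectrum[(freqs >= lo) & (freqs < hi)]
        # Bands above the Nyquist frequency are empty and reported as silence.
        info["%d-%dHz_db" % (lo, hi)] = (
            20.0 * np.log10(vals.mean() + 1e-12) if vals.size else float("-inf"))
    return info
```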
  • The output unit 14 of the present embodiment generates output information in which the sound analysis information generated from the sound information by the sound analysis unit 16 and the vehicle position information acquired by the acquisition unit 12 are associated with each other, and outputs it to a predetermined output destination (for example, the storage device 108 of the information processing apparatus 10 or the server apparatus 20).
  • the receiving unit 22 of the present embodiment receives output information including sound analysis information output from the output unit 14 of the information processing apparatus 10 as input information.
  • The sound analysis information includes, for example, volume analysis results (total volume and volume by frequency), detection results of a person's (adult/child) voice, detection results of bicycle chain sounds, detection results of emergency vehicle sirens, detection results of sudden braking sounds, detection results of collision sounds, and the like.
  • the association unit 24 of the present embodiment stores the sound analysis information included in the input information in association with the map information using the vehicle position information included in the input information.
  • the associating unit 24 can compare the position information of the vehicle included in the input information with the position information on the map, and associate the sound analysis information with the map information based on the comparison result.
  • the associating unit 24 can associate sound analysis information and map information using a table as shown in FIG.
  • FIG. 11 is a diagram showing an example of a table for associating sound analysis information with map information.
  • The table illustrated in FIG. 11 stores sound analysis information for each piece of map position information stored in the "position on the map" column.
  • the map position information may be, for example, information indicating one position or information indicating an area defined by three or more position information.
  • the map position information may be a simple identifier. In this case, information indicating a specific position or area on the map is separately prepared in a state associated with the identifier.
  • The associating unit 24 compares the position information included in the input information received by the receiving unit 22 with the map position information stored in the "position on the map" column, and can thereby specify the row (record) in which the sound analysis information included in the input information is to be stored. Specifically, the associating unit 24 can specify a row (record) storing map position information that matches the position indicated by the position information included in the input information, or that includes that position, as the "row (record) in which the sound analysis information is to be stored". Then, the associating unit 24 stores the sound analysis information included in the input information received by the receiving unit 22 in the "sound analysis information" column of the identified row (record).
  • As described above, the associating unit 24 can associate the sound analysis information with the map information by storing the sound analysis information in association with the information indicating the position on the map. Further, when the sound information on which the sound analysis information is based is also acquired from the information processing apparatus 10, the associating unit 24 may store that sound information in association as well. In this case, a "sound information" column is further provided in the table of FIG. 11.
  • The information processing apparatus 10 of the present embodiment has a hardware configuration as shown in FIG. 6, for example.
  • the storage device 108 of this embodiment further stores a program module for realizing the sound analysis unit 16.
  • the processor 104 further realizes the function of the sound analysis unit 16 by reading the program module into the memory 106 and executing it.
  • As described above, in the present embodiment, the analysis of the sound information generated by the sound collection device 30 is executed by the information processing device 10 of each vehicle, and the resulting sound analysis information is included in the output information. Thereby, the load of the sound information analysis processing on the server apparatus 20 can be reduced.
  • FIG. 12 is a block diagram conceptually showing the functional structure of the analysis server device 40 in the third embodiment.
  • the analysis server device 40 of this embodiment includes an acquisition unit 42 and an additional information generation unit 44.
  • the analysis server device 40 may be the server device 20 in each of the above-described embodiments, or may be another device provided separately from the server device 20.
  • the acquisition unit 42 acquires sound information generated using a sound collecting device mounted on the vehicle and position information associated with the sound information.
  • the position information associated with the sound information is, for example, “position information of the vehicle when the sound information is generated” or “map position information” described in the above embodiments.
  • the acquisition unit 42 can acquire sound information and vehicle position information from the information processing apparatus 10 described in the above embodiments.
  • When the information processing apparatus 10 generates sound analysis information as in the second embodiment, the acquisition unit 42 can acquire the sound analysis information and the position information of the vehicle. In this case, the additional information generation unit 44 does not have to analyze the sound information itself.
  • When the server device 20 or the like described in each of the above embodiments accumulates the sound information and vehicle position information acquired from the information processing device 10 in a predetermined table, the acquisition unit 42 can acquire the sound information and vehicle position information by referring to that table. Likewise, when sound analysis information is accumulated, the acquisition unit 42 can acquire the sound analysis information and the vehicle position information, and the additional information generation unit 44 does not have to analyze the sound information itself.
  • The additional information generation unit 44 can also refer to a table as shown in FIG. 5 or FIG. 11, for example, and acquire sound information or sound analysis information together with the map position information associated with that information. When referring to the table of FIG. 11, the additional information generation unit 44 does not have to analyze the sound information.
  • The additional information generation unit 44 generates additional information using the analysis result of the sound information acquired by the acquisition unit 42. Specifically, the additional information generation unit 44 can generate additional information as described below, using sound analysis information that includes at least one of a volume analysis result, a frequency band analysis result, and a sound source analysis result of the sound information. In the following, an example in which the additional information generation unit 44 itself analyzes the sound information will be described.
  • However, the additional information generation unit 44 may instead be configured to acquire the various analysis results of the sound information from another processing unit.
  • the additional information generation unit 44 analyzes the sound information acquired by the acquisition unit 42 in the same manner as the sound analysis unit 16 of the second embodiment, and generates information (sound analysis information) indicating the analysis result.
  • Specifically, the additional information generation unit 44 generates sound analysis information including at least one of an analysis result of the volume of the sound information (total volume or volume by frequency) and an analysis result of the sound source of the sound information (for example, a person, a bicycle, an emergency vehicle, sudden braking, etc.).
  • The additional information generation unit 44 then takes statistics of the analysis results indicated by the sound analysis information and generates additional information based on those statistics. For example, the additional information generation unit 44 can generate additional information indicating whether the volume level is high or low, based on the average, median, mode, or the like of the volume level of the sound information, which can be calculated from the sound analysis information. In this case, the additional information generation unit 44 can determine the level of the volume by comparing these values with a predetermined threshold for judging that the volume level is high or low. The threshold is defined, for example, in a program module of the additional information generation unit 44.
  • Further, based on the detection frequency of each sound source included in the sound analysis information, the additional information generation unit 44 can generate additional information indicating characteristics such as many/few pedestrians and bicycles, frequent/infrequent passage of emergency vehicles, and fast/slow vehicle moving speed. For example, the additional information generation unit 44 can determine whether the detection frequency of each sound source is high or low by comparing the number of detections of each sound source with a threshold for judging the level of detection frequency.
  • the threshold is defined by a program module of the additional information generation unit 44 or the like.
  • the threshold value may be defined as a different value for each sound source, or may be defined as a value common to all sound sources.
  • Further, the additional information generation unit 44 can generate additional information indicating whether the average moving speed of vehicles is fast or slow, based on the average, median, mode, or the like of the speeds that can be estimated from the frequency distribution of road noise and engine sound.
  • Note that the additional information generation unit 44 may (1) total the number of pieces of sound information for each group classified by the position information associated with the sound information, (2) select groups whose number of pieces of sound information is equal to or greater than a reference threshold, and (3) generate additional information using the analysis results of the sound information belonging to the selected groups. A sketch of step (3) follows.
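  • The sketch below turns statistics of sound analysis results for one group into additional information (threshold values are illustrative; the document only states that they are defined in a program module):

```python
from statistics import median

def make_additional_info(analyses, volume_db_threshold=60.0,
                         detections_threshold=10):
    """Turn per-position sound analysis results into map additional information.

    analyses: list of dicts like {"volume_db": 58.0, "sources": ["person"]}.
    """
    volumes = [a["volume_db"] for a in analyses]
    counts = {}
    for a in analyses:
        for src in a["sources"]:
            counts[src] = counts.get(src, 0) + 1

    return {
        "volume_level": "high" if median(volumes) >= volume_db_threshold else "low",
        "many_pedestrians": counts.get("person", 0) >= detections_threshold,
        "frequent_emergency_vehicles":
            counts.get("emergency_vehicle", 0) >= detections_threshold,
    }
```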
  • the additional information generation unit 44 associates the additional information with the position on the map information specified by using the position information acquired by the acquisition unit 42.
  • For example, the additional information generation unit 44 can associate the additional information with the map information using a table as shown in FIG. 13.
  • FIG. 13 is a diagram illustrating an example of a table for associating additional information with map information.
  • the table illustrated in FIG. 13 stores additional information for each map position information.
  • the map position information may be, for example, information indicating one position or information indicating an area defined by three or more position information.
  • the map position information may be a simple identifier. In this case, information indicating a specific position or area on the map is separately prepared in a state associated with the identifier.
  • Specifically, the additional information generation unit 44 compares the position information acquired by the acquisition unit 42 with the map position information stored in the "position on the map" column, and can thereby specify the row (record) in which the generated additional information should be stored. That is, the additional information generation unit 44 can specify a row (record) storing map position information that matches the position indicated by the acquired position information, or that includes that position, as the "row (record) in which the additional information should be stored". Then, the additional information generation unit 44 stores the additional information in the "additional information" column of the identified row (record). As described above, the additional information generation unit 44 can associate the additional information with the map information by storing the additional information in association with the information indicating the position on the map.
  • sound information generated by the sound collection device 30 mounted on each vehicle is acquired, and additional information to be added to the map information is generated using the analysis result of the sound information.
  • This additional information is, for example, information indicating characteristics such as high/low volume level, many/few pedestrians and bicycles, frequent/infrequent passage of emergency vehicles, and fast/slow vehicle moving speed.
  • In the present embodiment, a "position on the map" is specified based on the position information associated with the sound information, and the additional information is associated with that position on the map. Thereby, map information useful in various aspects can be created.
  • Each functional component of the analysis server device 40 may be realized by hardware that implements the component (for example, a hard-wired electronic circuit), or by a combination of hardware and software (for example, a combination of an electronic circuit and a program for controlling the electronic circuit).
  • FIG. 14 is a diagram illustrating a hardware configuration of the analysis server device 40.
  • the analysis server device 40 includes a bus 402, a processor 404, a memory 406, a storage device 408, an input / output interface 410, and a network interface 412.
  • the bus 402 is a data transmission path through which the processor 404, the memory 406, the storage device 408, the input / output interface 410, and the network interface 412 transmit / receive data to / from each other.
  • the method of connecting the processors 404 and the like is not limited to bus connection.
  • the processor 404 is an arithmetic processing unit realized using a microprocessor or the like.
  • the memory 406 is a memory realized using a RAM (Random Access Memory) or the like.
  • the storage device 408 is a storage device realized by using a ROM (Read Only Memory), a flash memory, or the like.
  • the input / output interface 410 is an interface for connecting the analysis server device 40 to peripheral devices.
  • For example, the input / output interface 410 is connected to an input device such as a keyboard and a mouse, a display device such as an LCD (Liquid Crystal Display), or a touch panel in which these are integrated.
  • the input device and the display device may be connected via a network interface 412 over a network.
  • the network interface 412 is an interface for connecting the analysis server device 40 to a communication network.
  • the analysis server device 40 has a network interface 412 for connecting to a WAN (Wide Area Network) communication network.
  • the storage device 408 stores a program module for realizing each functional component of the analysis server device 40.
  • the processor 404 reads out the program module to the memory 406 and executes it, thereby realizing the functions of the functional components of the analysis server device 40.
  • FIG. 15 is a flowchart illustrating the flow of processing executed by the analysis server device 40 according to the third embodiment.
  • In the following description, it is assumed that the analysis server device 40 generates additional information with reference to a table as illustrated in FIG. 5.
  • The acquisition unit 42 acquires sound information and the position information (map position information) associated with the sound information (S302). Specifically, the acquisition unit 42 refers to a table as shown in FIG. 5 and reads the sound information stored in each record together with the map position information associated with it.
  • The additional information generation unit 44 groups the sound information based on the position information acquired in S302 and totals the number of pieces of sound information in each group (S304). In the table of FIG. 5, the sound information is grouped by map position information, so the additional information generation unit 44 counts the pieces of sound information for each group defined by the map position information.
  • The additional information generation unit 44 selects one group (S306) and determines whether the number of pieces of sound information counted for that group is equal to or greater than a predetermined threshold (reference) (S308).
  • The predetermined threshold is defined, for example, in the program module of the additional information generation unit 44 as a value corresponding to the reliability required of the additional information generated from the analysis results of the sound information. The larger the threshold, the more pieces of sound information are used when generating additional information; using the analysis results of many pieces of sound information improves the reliability of the additional information.
  • When the counted number is less than the threshold (S308: NO), the additional information generation unit 44 determines whether there is a group that has not yet been selected (S316). If all the groups have been selected (S316: NO), the analysis server device 40 ends the process. On the other hand, if there is a group that has not yet been selected (S316: YES), the additional information generation unit 44 selects that group (S306) and again determines whether the number of pieces of sound information acquired for it is equal to or greater than the predetermined threshold (S308).
  • When the counted number is equal to or greater than the threshold (S308: YES), the additional information generation unit 44 analyzes each piece of sound information belonging to the group and generates sound analysis information (for example, information indicating detection results for persons, bicycles, emergency vehicles, sudden brake sounds, and the like) (S310). Further, the additional information generation unit 44 generates additional information (for example, many/few pedestrians and bicycles, a high/low passing frequency of emergency vehicles, a fast/slow vehicle moving speed, and the like) based on statistics of the analysis results indicated by the sound analysis information (S312).
  • The additional information generation unit 44 then associates the additional information with the map information by using the position information associated with the sound information belonging to the selected group (S314). For example, the additional information generation unit 44 can associate the additional information with the map information by generating a table as shown in FIG. 13 that links the map position information read in S302 with the additional information generated in S312.
  • Thereafter, the additional information generation unit 44 determines whether there is a group that has not yet been selected (S316). If all the groups have been selected (S316: NO), the analysis server device 40 ends the process. On the other hand, if there is a group that has not yet been selected (S316: YES), the additional information generation unit 44 selects that group (S306) and again determines whether the number of pieces of sound information acquired for it is equal to or greater than the predetermined threshold (S308). A minimal sketch of this S302-S316 flow is given below.
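The following is a minimal sketch of the S302-S316 flow in Python. The record layout and the analyze_sound()/summarize() helpers are assumptions introduced for illustration; they are not part of the specification.

```python
from collections import defaultdict

RELIABILITY_THRESHOLD = 10  # the predetermined threshold (reference) of S308

def generate_additional_info(records, analyze_sound, summarize):
    """records: iterable of (map_position, sound_info) pairs read in S302."""
    groups = defaultdict(list)
    for map_position, sound_info in records:     # S304: group by map position
        groups[map_position].append(sound_info)

    additional_info_table = {}
    for map_position, sounds in groups.items():   # S306/S316: visit every group
        if len(sounds) < RELIABILITY_THRESHOLD:   # S308: NO -> skip this group
            continue
        analyses = [analyze_sound(s) for s in sounds]          # S310: analysis
        additional_info = summarize(analyses)                  # S312: statistics
        additional_info_table[map_position] = additional_info  # S314: associate
    return additional_info_table
```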
  • The additional information generation unit 44 can also generate more detailed additional information by using other information acquired together with the sound information, as in the following examples.
  • the additional information generation unit 44 can generate additional information indicating characteristics by time zone or weather by further using time information and weather information.
  • The additional information generation unit 44 can include, in the additional information, the wind strength that can be estimated by further using the vehicle operation information.
  • Furthermore, the driver's state information can be reflected in the additional information.
  • state information indicating that the driver was surprised is output from the output unit 14 in association with sound information indicating sudden braking of the bicycle.
  • In this case, the additional information generation unit generates additional information indicating that drivers who hear sudden braking at this position tend to be surprised, and adds the additional information to the map information.
  • state information indicating that there is no change in the driver's state is output from the output unit 14 in association with the sound information indicating the bicycle brake.
  • In this case, the additional information generation unit generates additional information indicating that the driver's state changes little even when the brake sound of a bicycle is heard at this position, and adds it to the map information. Because a map generated in this way is used by other vehicles when driving, it is possible to predict in advance how a driver is likely to react to an external sound occurring at the destination, and to perform vehicle control based on that prediction. A small illustrative sketch follows.
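As a loose illustration of how such driver-reaction additional information might be derived, the following sketch assumes a hypothetical record format in which each sound event at a position carries a "surprised" flag obtained from the state information; the labels and the majority threshold are arbitrary assumptions.

```python
def driver_reaction_additional_info(events):
    """events: list of dicts like {"sound": "bicycle_brake", "surprised": True}
    collected at one position on the map."""
    if not events:
        return None
    surprised = sum(1 for e in events if e["surprised"])
    # If a majority of drivers reacted strongly at this position, record that
    # fact so vehicles approaching the position can prepare control in advance.
    if surprised / len(events) >= 0.5:
        return "drivers tend to be startled by brake sounds here"
    return "brake sounds here cause little change in driver state"
```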
  • the analysis server device 40 of this embodiment is the same as that of the third embodiment except for the following points.
  • FIG. 16 is a block diagram conceptually showing the functional structure of the analysis server device 40 in the fourth embodiment. As shown in FIG. 16, the analysis server device 40 of the present embodiment further includes a display output unit 46 in addition to the configuration of the third embodiment.
  • the display output unit 46 acquires range information that specifies the display range of the map information, and reads additional information associated with the range indicated by the range information.
  • The display output unit 46 can acquire the range information specifying the display range of the map information from, for example, an input device connected to the analysis server device 40, or from another device connected to the analysis server device 40 via the network interface 412 (for example, a navigation device mounted on a vehicle, or a user's PC (Personal Computer)).
  • the display output unit 46 generates and outputs drawing data to be displayed overlaid on the map information based on the read additional information.
  • the output destination of the drawing data is a display device connected to the analysis server device 40 or another device connected via the network interface 412.
  • the drawing data is used to visualize additional information.
  • the drawing data is data for drawing a volume level distribution map, a sound source distribution map, a vehicle average moving speed distribution map, and the like.
  • the display output unit 46 generates drawing data for displaying information as exemplified in FIGS. 17 to 22, for example.
  • FIGS. 17 to 22 are diagrams illustrating an example of the drawing data generated by the display output unit 46.
  • FIG. 17 shows an example in which the display output unit 46 generates and outputs drawing data indicating the volume level distribution in the designated display range.
  • The display output unit 46 reads the additional information stored for each position on the map with reference to, for example, a table as shown in FIG. 13. The display output unit 46 then specifies the volume level for each position based on the additional information for that position (e.g., the volume level is high/low), and combines the volume levels of the positions to generate drawing data indicating the distribution of volume levels. A screen as shown in FIG. 17 is then displayed using the drawing data generated by the display output unit 46. With such a screen, a person browsing it can easily grasp the volume level (that is, the noise level) at each place.
  • The display output unit 46 may further acquire information designating a frequency band via an input field as indicated by reference numeral 170 in FIG. 17, and may generate and output drawing data indicating the volume level distribution in the designated frequency band. This makes it easy to grasp which frequency band of sound is generated at what volume at each location.
  • The volume for each frequency band is useful information for understanding the characteristics of a place. For example, a place with a high volume in the low range can be inferred to be a place where low-frequency or very-low-frequency sounds, which can cause discomfort or harm health, are emitted fairly often. Likewise, a place with a high volume in the high range can be inferred to be a place where high-frequency sounds, such as metallic construction sounds or bicycle brake sounds, occur frequently. A minimal per-band volume sketch follows.
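A minimal sketch of per-band volume computation with NumPy follows; the band edges are illustrative assumptions, not values from the specification.

```python
import numpy as np

BANDS = {"low": (20, 250), "mid": (250, 2000), "high": (2000, 8000)}  # Hz

def band_levels(samples, sample_rate):
    """Return a relative volume level (dB) per frequency band."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    levels = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)                # select the band
        power = spectrum[mask].sum()
        levels[name] = 10 * np.log10(power + 1e-12)        # relative dB
    return levels
```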
  • The display output unit 46 can also output drawing data indicating the distribution of the frequency bands of the sound information in the range indicated by the range information.
  • In this case, the additional information generation unit 44 can generate additional information indicating a statistical value of the frequency (for example, an average value or a median value) from the analysis results of the frequencies of the sound information collected at each point.
  • The display output unit 46 can then generate drawing data indicating the frequency band distribution of the sound information based on the additional information generated in this way. In this case, for example, an item "frequency distribution" is added to the input field for selecting the display information in FIG. 17.
  • the frequency band distribution displayed in this way is also useful information for estimating the characteristics of the place.
  • FIG. 18 shows an example in which the display output unit 46 generates and outputs drawing data indicating the distribution of the detection frequency of sound sources in the designated display range.
  • Specifically, the display output unit 46 reads the additional information stored for each position on the map with reference to, for example, a table as shown in FIG. 13. The display output unit 46 then determines, for each position, whether a sound source with a high detection frequency exists there, based on the additional information for that position (e.g., many/few pedestrians, many/few bicycles, a high/low passing frequency of emergency vehicles, a high/low frequency of sudden braking). The display output unit 46 then generates drawing data indicating the distribution of the detection frequency of sound sources.
  • a screen as shown in FIG. 18 is displayed by the drawing data generated by the display output unit 46.
  • This screen makes it easy for the person viewing it to find places that require attention, such as places where there are many pedestrians and bicycles, places where emergency vehicles frequently pass, and places where sudden braking frequently occurs.
  • The display output unit 46 may further acquire, for example, information designating the type of sound source via an input field as indicated by reference numeral 180 in FIG. 18, and may generate and output drawing data indicating the distribution of the detection frequency of the designated type of sound source.
  • The input field is not limited to the example of FIG. 18 and may be configured so that a plurality of types can be specified, for example with check boxes. By narrowing down the information displayed on the screen in this way, it becomes easier for a person browsing the screen to find the desired information. A small filtering sketch follows.
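The sketch below illustrates one way this narrowing-down might work, assuming a hypothetical record layout in which the additional information per position holds detection frequencies per sound-source type; the field names are assumptions.

```python
def source_markers(additional_info_table, display_range, source_type=None):
    """additional_info_table: {position: {"sources": {"bicycle": freq, ...}}}
    display_range: set of positions inside the designated display range."""
    markers = []
    for position, info in additional_info_table.items():
        if position not in display_range:
            continue
        for kind, frequency in info.get("sources", {}).items():
            # Narrow the display to the designated type (input field 180).
            if source_type is not None and kind != source_type:
                continue
            markers.append({"position": position, "type": kind,
                            "detection_frequency": frequency})
    return markers
```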
  • In the examples of FIGS. 19 to 21, drawing data indicating whether a sound source affects the driver is generated and output.
  • Specifically, the display output unit 46 reads, for example, the additional information indicating the driver's state stored for each position on the map, and generates drawing data indicating whether the sound source at each position affects the driver.
  • In FIG. 19, an icon 191 and an icon 192 are shown as an example of drawing data indicating whether the sound source affects the driver.
  • the icon 191 is an icon indicating that the sound source does not affect the driver or that the sound source has little effect on the driver.
  • the icon 192 is an icon indicating that the sound source may affect the driver.
  • the “drawing data indicating whether the sound source affects the driver” displayed by the display output unit 46 is not limited to the example of FIG.
  • As shown in FIG. 20, the display output unit 46 may display icons 201 and 202 that indicate changes in a person's facial expression.
  • As shown in FIG. 21, the display output unit 46 may display icons 211 and 212 indicating changes in heart rate.
  • The degree of influence on the driver may also be divided into three or more levels, and icons corresponding to those levels may be displayed.
  • the display output unit 46 can specify the icon to be displayed based on the stage indicated by the additional information for each position on the map.
  • The screens illustrated in FIGS. 19 to 21 allow a person viewing them to easily understand whether a sound source appearing at a position they are about to travel through is likely to affect them as a driver, and can thus contribute to driving support.
  • FIG. 22 shows an example in which the display output unit 46 generates and outputs drawing data indicating the distribution of the average speed of vehicles in the designated display range.
  • Specifically, the display output unit 46 reads the additional information for each position on the map with reference to, for example, a table as shown in FIG. 13. The display output unit 46 then specifies the average vehicle speed for each position based on the additional information stored for that position (e.g., the average vehicle speed is fast/slow), and combines the average speeds of the positions to generate drawing data indicating the distribution of average vehicle speeds. A minimal sketch follows.
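A minimal sketch of assembling such distribution data follows; the speed thresholds and colour scheme are illustrative assumptions, not values from the specification.

```python
def speed_distribution(additional_info_table, display_range):
    """additional_info_table: {position: {"avg_speed": km/h, ...}}
    display_range: set of positions inside the designated display range."""
    cells = []
    for position, info in additional_info_table.items():
        if position not in display_range or "avg_speed" not in info:
            continue
        speed = info["avg_speed"]  # taken from the additional information
        # Map the speed to a colour so slow (congested) areas stand out.
        colour = "red" if speed < 20 else "yellow" if speed < 40 else "green"
        cells.append({"position": position, "speed": speed, "colour": colour})
    return cells
```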
  • The analysis server device 40 of this embodiment also has the hardware configuration shown in FIG. 14.
  • the storage device 408 of this embodiment further stores a program module for realizing the display output unit 46.
  • the processor 404 reads out the program module of the display output unit 46 to the memory 406 and executes it, thereby realizing the function of the display output unit 46.


Abstract

An information collection system (1) is provided with an information processing device (10) and a server device (20). The information processing device (10) acquires sound information generated by a sound collection device (30) mounted in a mobile entity, together with information indicating the position of the mobile entity at the time the sound information was generated. The server device (20) acquires, from the information processing device (10), the sound information and the position information of the mobile entity at the time the sound information was generated, and stores them.

Description

Information processing apparatus, information collection method, and program
 The present invention relates to an information processing apparatus, an information collection method, and a program.
 Examples of techniques for collecting and utilizing various information while a vehicle is traveling are disclosed in, for example, Patent Document 1 and Patent Document 2 below. Patent Document 1 discloses a technique in which sound generated when a vehicle travels over a bridge joint is collected using a microphone mounted on the vehicle, and an abnormality of the bridge joint is detected by analyzing the sound. Patent Document 2 discloses a technique in which, when a vibration measurement value equal to or greater than a predetermined threshold is detected by vibration measurement means while the vehicle is traveling, the vehicle position information and the vibration measurement value at that time are output to a device operated by a road administrator, thereby detecting a place that needs repair.
[Patent Document 1] JP 2011-242294 A
[Patent Document 2] JP 2016-95184 A
 Creating map information that has a large amount of information and is useful in various aspects can improve technologies that utilize map information. A technique for creating such map information is therefore desired.
 An example of the problems to be solved by the present invention is to provide a technique for creating map information that has a large amount of information and is useful in various aspects, and thereby to contribute to driving support for a moving body.
 The invention described in claim 1 is an information processing apparatus comprising:
 an acquisition unit that acquires sound information that is collected by a sound collection device mounted on a moving body and generated based on an external sound generated outside the moving body, and position information of the moving body at the time the sound information was generated; and
 an output unit that generates output information in which the sound information and the position information are associated with each other, and outputs the output information to a predetermined output destination.
 The invention described in claim 9 is an information collection method in which a computer performs:
 a step of acquiring sound information that is collected by a sound collection device mounted on a moving body and generated based on an external sound generated outside the moving body, and position information of the moving body at the time the sound information was generated; and
 a step of generating output information in which the sound information and the position information are associated with each other, and outputting the output information to a predetermined output destination.
 The invention described in claim 10 is a program for causing a computer to function as:
 means for acquiring sound information that is collected by a sound collection device mounted on a moving body and generated based on an external sound generated outside the moving body, and position information of the moving body at the time the sound information was generated; and
 means for generating output information in which the sound information and the position information are associated with each other, and outputting the output information to a predetermined output destination.
 The above-described object and other objects, features, and advantages will become more apparent from the preferred embodiments described below and the accompanying drawings.
FIG. 1 is a block diagram conceptually showing a configuration example of an information collection system.
FIG. 2 is a block diagram conceptually showing the functional configuration of an information processing device according to the first embodiment.
FIG. 3 is a diagram for explaining the flow of processing in which an acquisition unit specifies "the position information of the vehicle when sound information was generated" based on the correspondence between the acquisition time of the vehicle position information and the acquisition time of the sound information.
FIG. 4 is a block diagram conceptually showing the functional configuration of a server device according to the first embodiment.
FIG. 5 is a diagram showing an example of a table that associates sound information with map information.
FIG. 6 is a diagram illustrating the hardware configuration of the information processing device and the server device.
FIG. 7 is a flowchart illustrating the flow of processing executed by the information processing device of the first embodiment.
FIG. 8 is a flowchart showing another example of processing executed by the information processing device.
FIG. 9 is a flowchart illustrating the flow of processing executed by the server device of the first embodiment.
FIG. 10 is a block diagram conceptually showing the functional configuration of an information processing device according to the second embodiment.
FIG. 11 is a diagram showing an example of a table that associates sound analysis information with map information.
FIG. 12 is a block diagram conceptually showing the functional configuration of an analysis server device according to the third embodiment.
FIG. 13 is a diagram showing an example of a table that associates additional information with map information.
FIG. 14 is a diagram illustrating the hardware configuration of the analysis server device.
FIG. 15 is a flowchart illustrating the flow of processing executed by the analysis server device according to the third embodiment.
FIG. 16 is a block diagram conceptually showing the functional configuration of an analysis server device according to the fourth embodiment.
FIGS. 17 to 22 are diagrams showing examples of drawing data generated by a display output unit.
 Hereinafter, embodiments of the present invention will be described with reference to the drawings. In all the drawings, the same reference numerals are given to similar components, and descriptions thereof are omitted as appropriate. Unless otherwise specified, each block in the block diagrams represents a functional unit, not a hardware unit.
 [First Embodiment]
 〔System configuration〕
 FIG. 1 is a block diagram conceptually showing a configuration example of the information collection system 1. The dotted lines in the figure represent wired or wireless communication paths. In the example of FIG. 1, the information collection system 1 includes an information processing device 10 connected to a sound collection device 30 mounted on each vehicle, and a server device 20 connected to the information processing device 10 of each vehicle. The configuration of the information collection system 1 is not limited to the example of FIG. 1; the information collection system 1 may include one or more information processing devices 10 and one or more server devices 20. In this specification, the case of a vehicle is described as a concrete example, but the present invention may also be applied to moving bodies other than vehicles.
 The information processing device 10 is a device mounted on a vehicle. The information processing device 10 can connect to the network 50 via a wireless line such as 3G or LTE (Long Term Evolution) and communicate with the server device 20. The information processing device 10 is, for example, an external device that can be attached inside or outside the vehicle. The information processing device 10 may also be a device incorporated in the vehicle, such as an ECU (Electronic Control Unit), or a portable terminal (e.g., a smartphone or tablet) in which applications realizing the functions described below are installed.
 The server device 20 is a device having a function of collecting the information acquired by the information processing devices 10. The server device 20 can communicate with each information processing device 10 mounted on each vehicle via the network 50.
 The sound collection device 30 is a device including a microphone or a microphone array that generates and outputs an electrical signal (sound information) corresponding to the collected sound waves. The sound collection device 30 is provided, for example, on the outer periphery of the vehicle; its installation position and number are not particularly limited. The sound collection device 30 can transmit sound information to the information processing device 10 through a wireless or wired connection. The information processing device 10 and the sound collection device 30 may also be integrated into a single device.
 〔Functional configuration of the information processing device 10〕
 FIG. 2 is a block diagram conceptually showing the functional configuration of the information processing device 10 of the first embodiment. As shown in FIG. 2, the information processing device 10 of this embodiment includes an acquisition unit 12 and an output unit 14.
 The acquisition unit 12 acquires sound information generated using the sound collection device 30 mounted on the vehicle. The acquisition unit 12 also acquires the position information of the vehicle at the time the sound information was generated.
 For example, the acquisition unit 12 acquires position information from a GPS (Global Positioning System) module (not shown) mounted on the vehicle in accordance with the timing at which sound information is acquired from the sound collection device 30. Alternatively, the acquisition unit 12 may acquire the position information of surrounding radio base stations at that timing and calculate the position information of the vehicle from the positions of those base stations. The position information acquired in this way can be used as "the position information of the vehicle when the sound information was generated".
 The acquisition unit 12 can also specify "the position information of the vehicle when the sound information was generated" based on the correspondence between the time at which the vehicle position information was acquired and the time at which the sound information generated using the sound collection device 30 was acquired.
 An example of this will be described with reference to FIG. 3. FIG. 3 explains the flow of processing in which the acquisition unit 12 specifies "the position information of the vehicle when the sound information was generated" based on the correspondence between the acquisition times of the vehicle position information and of the sound information. The horizontal axis of FIG. 3 is the time axis, and the graph labeled S shows the sound information that the acquisition unit 12 acquired from the sound collection device 30. Assume that the acquisition unit 12 acquired the vehicle position information at time t_A and at time t_B. In this case, based on the correspondence of the times, the acquisition unit 12 can specify the vehicle position information acquired at time t_A and at time t_B as the position information of the vehicle for the sound information S_B acquired between time t_A and time t_B. Here, the acquisition unit 12 may specify position information having a width (position information including the two points at time t_A and at time t_B) as the position information of the vehicle for the sound information S_B. Alternatively, the acquisition unit 12 may specify the position information of a representative position between the position at time t_A and the position at time t_B (for example, the midpoint between the two positions). The acquisition unit 12 can similarly specify the vehicle position information for the sound information S_A before time t_A and the sound information S_C after time t_B. A small sketch of this time-based association follows.
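A small sketch of this time-based association, assuming position fixes and sound segments carry acquisition timestamps; the data shapes are illustrative assumptions.

```python
import bisect

def positions_for_segment(fix_times, fix_positions, seg_start, seg_end):
    """fix_times: sorted acquisition times of position fixes (t_A, t_B, ...).
    fix_positions: (lat, lon) tuples parallel to fix_times.
    Returns the fixes bracketing the sound segment plus a representative midpoint."""
    i = bisect.bisect_right(fix_times, seg_start) - 1  # fix at or before start
    j = bisect.bisect_left(fix_times, seg_end)         # fix at or after end
    i = max(i, 0)
    j = min(j, len(fix_times) - 1)
    p1, p2 = fix_positions[i], fix_positions[j]
    # Either keep both bracketing fixes ("position information with a width")
    # or reduce them to a single representative position (the midpoint).
    midpoint = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
    return (p1, p2), midpoint
```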
 The output unit 14 generates output information in which the sound information acquired by the acquisition unit 12 and the vehicle position information are linked to each other, and outputs the output information to a predetermined output destination. The predetermined output destination may be, for example, the server device 20 or a storage device (not shown) built into the information processing device 10.
 〔Configuration of the server device 20〕
 FIG. 4 is a block diagram conceptually showing the functional configuration of the server device 20 of the first embodiment. As shown in FIG. 4, the server device 20 includes a receiving unit 22 and an associating unit 24.
 The receiving unit 22 receives the output information output from the output unit 14 of the information processing device 10 as input information.
 The associating unit 24 uses the vehicle position information included in the input information received by the receiving unit 22 to associate the sound information included in the input information with map information, and stores it in a predetermined storage unit. The predetermined storage unit is, for example, an internal storage device (not shown) of the server device 20 or an external storage device connected to the server device 20. As an example, the associating unit 24 can associate sound information with map information using a table as shown in FIG. 5.
 FIG. 5 is a diagram showing an example of a table that associates sound information with map information. The table illustrated in FIG. 5 stores sound information for each piece of information indicating a position on the map (hereinafter, "map position information") stored in the "position on the map" column. The map position information may be, for example, information indicating a single position, or information indicating an area defined by three or more pieces of position information. The map position information may also be a simple identifier; in that case, information indicating the specific position or area on the map is prepared separately in a state linked to that identifier.
 In the example of FIG. 5, the associating unit 24 compares the position information included in the input information received by the receiving unit 22 with the map position information stored in the "position on the map" column to specify the row (record) in which the sound information included in the input information should be stored. Specifically, the associating unit 24 specifies, as the row in which the sound information should be stored, the row storing map position information that matches the position indicated by the position information included in the input information, or whose area includes that position. The associating unit 24 then stores the sound information included in the input information in the "sound information" column of the specified row. By storing sound information in association with information indicating a position on the map in this way, the associating unit 24 can associate it with the map information. A minimal sketch of this matching follows.
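A minimal sketch of this matching, assuming each map position is a rectangular area given as a bounding box (an assumption for illustration; the specification also allows single points and identifiers).

```python
def contains(area, point):
    """area: ((lat_min, lon_min), (lat_max, lon_max)) bounding box."""
    (lat_min, lon_min), (lat_max, lon_max) = area
    lat, lon = point
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def associate(map_table, input_info):
    """map_table: list of {"map_position": area, "sounds": [...]} records.
    input_info: {"position": (lat, lon), "sound": ...} received by unit 22."""
    for record in map_table:
        # A record matches when the reported position falls within the area
        # indicated by the record's map position information.
        if contains(record["map_position"], input_info["position"]):
            record["sounds"].append(input_info["sound"])  # "sound information" column
            return record
    return None  # no matching map position in the table
```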
 In the configuration described above, the sound information generated by the sound collection device 30 mounted on each vehicle and the position information of the vehicle at the time the sound information was generated are first sent to the server device 20 in a state of being linked to each other.
 Here, sound information contains various information that can indicate the situation at the time it was generated. For example, if the sound information contains a person's voice, it can be understood that a person was present around the vehicle, and the person's age and sex can also be estimated from features of the voice (such as its frequency distribution). If the sound information contains the siren of an emergency vehicle, it can be determined whether an emergency vehicle passed near the vehicle. If the sound information contains the sound of an object collision or of sudden braking, a contact accident or sudden braking around the vehicle can be detected. Further, when the sound information contains the vehicle's engine sound or road noise, the moving speed of the vehicle can be estimated using the characteristic that the volume in the high-frequency range increases as the vehicle moves faster. For example, an estimated value of the vehicle's moving speed can be calculated from the sound information using a table showing the correspondence between speeds and the frequency bands where the volume peaks, or using a function that derives a speed from a frequency band by multiplying the frequency band given as an argument by a coefficient corresponding to the correspondence between speed and frequency band. A minimal sketch of such an estimate follows.
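A minimal sketch of the frequency-based speed estimate follows; the linear coefficient is a placeholder assumption, not a calibrated value.

```python
import numpy as np

KMH_PER_HZ = 0.04  # hypothetical coefficient linking the peak band to speed

def estimate_speed(samples, sample_rate):
    """Estimate vehicle speed from the frequency band where volume peaks."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    peak_freq = float(freqs[int(np.argmax(spectrum))])  # band with peak volume
    return KMH_PER_HZ * peak_freq                       # estimated speed, km/h
```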
 Then, a "position on the map" is specified based on the vehicle position information linked to the sound information, and the sound information is stored in association with that position. By accumulating sound information for each position on the map, the sound information becomes a collection of information that can indicate the characteristics of each position (for example, many/few pedestrians and bicycles, a high/low passing frequency of emergency vehicles, a fast/slow vehicle moving speed, and so on). That is, according to this embodiment, map information with high information density (added value) that is useful in various aspects can be created.
 〔Hardware configuration〕
 Each functional component of the information processing device 10 and the server device 20 may be realized by hardware that implements that component (e.g., a hard-wired electronic circuit) or by a combination of hardware and software (e.g., a combination of an electronic circuit and a program that controls it). The case where each functional component of the information processing device 10 and the server device 20 is realized by a combination of hardware and software is further described below with reference to FIG. 6. FIG. 6 is a diagram illustrating the hardware configuration of the information processing device 10 and the server device 20.
 <Hardware configuration example of the information processing device 10>
 The information processing device 10 includes a bus 102, a processor 104, a memory 106, a storage device 108, an input/output interface 110, and a network interface 112. The bus 102 is a data transmission path through which the processor 104, the memory 106, the storage device 108, the input/output interface 110, and the network interface 112 exchange data with one another, although the method of interconnecting the processor 104 and the other components is not limited to a bus connection. The processor 104 is an arithmetic processing unit realized using a microprocessor or the like. The memory 106 is realized using a RAM (Random Access Memory) or the like, and the storage device 108 is realized using a ROM (Read Only Memory), a flash memory, or the like.
 The input/output interface 110 is an interface for connecting the information processing device 10 to peripheral devices. For example, a GPS module 1101 for acquiring information indicating the current position of the vehicle is connected to the input/output interface 110. Note that the information processing device 10 can also acquire the position information of surrounding base stations via the network interface 112 and estimate the current position of the vehicle from it, in which case the GPS module 1101 need not be connected to the input/output interface 110. Various input devices that accept user operations, a display device, or a touch panel integrating these may also be connected to the input/output interface 110.
 The network interface 112 is an interface for connecting the information processing device 10 to a communication network, and the information processing device 10 may have a plurality of network interfaces 112. For example, the information processing device 10 has a network interface 112 for connecting to a CAN communication network, a network interface 112 for connecting to a WAN (Wide Area Network) communication network, and a network interface 112 supporting a short-range wireless communication standard (e.g., Bluetooth (registered trademark)). The information processing device 10 can communicate with the external server device 20 via the WAN communication network to output the output information, can communicate with the sound collection device 30 by short-range wireless to acquire the sound information it generates, and can acquire information indicating the operation of the vehicle (for example, the moving speed of the vehicle) via the CAN communication network.
 The storage device 108 stores program modules for realizing the functional components of the information processing device 10. The processor 104 reads these program modules into the memory 106 and executes them, thereby realizing the function of each functional component of the information processing device 10.
 <Hardware configuration example of the server device 20>
 The server device 20 includes a bus 202, a processor 204, a memory 206, a storage device 208, an input/output interface 210, and a network interface 212. The bus 202 is a data transmission path through which the processor 204, the memory 206, the storage device 208, the input/output interface 210, and the network interface 212 exchange data with one another, although the method of interconnecting the processor 204 and the other components is not limited to a bus connection. The processor 204 is an arithmetic processing unit realized using a microprocessor or the like. The memory 206 is realized using a RAM (Random Access Memory) or the like, and the storage device 208 is realized using a ROM (Read Only Memory), a flash memory, or the like.
 The input/output interface 210 is an interface for connecting the server device 20 to peripheral devices. For example, input devices such as a keyboard and a mouse, display devices such as an LCD (Liquid Crystal Display), or a touch panel integrating these are connected to the input/output interface 210. The input device and the display device may also be connected over a network via the network interface 212.
 The network interface 212 is an interface for connecting the server device 20 to a communication network. The server device 20 has a network interface 212 for connecting to a WAN (Wide Area Network) communication network. For example, the server device 20 can communicate with the information processing device 10 mounted on a vehicle via the WAN communication network and acquire the output information from the information processing device 10 (input information for the server device 20).
 The storage device 208 stores program modules for realizing the functional components of the server device 20. The processor 204 reads these program modules into the memory 206 and executes them, thereby realizing the function of each functional component of the server device 20.
 〔Operation example of the information processing device 10〕
 An example of the operation of the information processing device 10 will be described with reference to FIG. 7. FIG. 7 is a flowchart illustrating the flow of processing executed by the information processing device 10 of the first embodiment.
 The acquisition unit 12 communicates with the sound collection device 30 wirelessly or by wire and acquires the sound information generated by the sound collection device 30 (S102). The acquisition unit 12 may be configured to acquire the sound information actively, for example by sending the sound collection device 30 a transmission request, or passively, for example by monitoring its transmissions.
 The acquisition unit 12 also acquires the position information at the time the sound information was generated, using vehicle position information based on GPS information from the GPS module 1101 or on the position information of surrounding base stations (S104). The acquisition unit 12 can acquire this position information by, for example, the method described with reference to FIG. 3.
 The output unit 14 links the sound information acquired in S102 with the vehicle position information acquired in S104 to generate output information (S106), and outputs the output information to the server device 20 (S108).
 〔Other operation examples of the information processing device 10〕
 The output unit 14 may also operate as shown in FIG. 8, for example. FIG. 8 is a flowchart showing another example of processing executed by the information processing device 10. The flowchart of FIG. 8 is executed following step S110 of the flowchart of FIG. 7.
 The output unit 14 temporarily accumulates the output information in a predetermined storage unit (for example, the storage device 108 of the information processing device 10) (S110). The output unit 14 then determines whether a transmission condition for the output information accumulated in the predetermined storage unit is satisfied (S112). The transmission condition is, for example, that the number of pieces of output information accumulated in the storage device has reached a predetermined number, or that the current time has reached a scheduled transmission time.
 When the transmission condition is not satisfied (S112: NO), the output unit 14 does not execute the processing described below; it executes that processing when the transmission condition is satisfied later.
 When the transmission condition is satisfied (S112: YES), the output unit 14 reads the output information accumulated in the predetermined storage unit and transmits it to the server device 20 (S114). The output unit 14 then deletes the output information transmitted in S114 from the predetermined storage unit (S116). In detail, the output unit 14 deletes the transmitted output information from the predetermined storage unit when it receives from the server device 20 a confirmation signal indicating that the output information was received normally. The output unit 14 may delete the output information records themselves, or may logically delete the output information by attaching a deletion flag to it; it may also flag transmitted output information and delete the flagged information in a periodically executed batch process. The output unit 14 may also be configured to retransmit the output information when it receives a signal indicating a reception error from the server device 20. A minimal sketch of this buffered transmission follows.
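A minimal sketch of the buffered transmission of FIG. 8, assuming a simple on-disk queue and a send() callable that raises on failure; the path and batch size are illustrative assumptions.

```python
import json
import os

QUEUE_PATH = "/tmp/output_queue.jsonl"  # stand-in for the storage device 108
BATCH_SIZE = 50                          # example transmission condition

def store(output_info):
    """S110: accumulate one piece of output information in the queue."""
    with open(QUEUE_PATH, "a") as f:
        f.write(json.dumps(output_info) + "\n")

def flush_if_due(send):
    """S112-S116: transmit and delete when the transmission condition holds."""
    if not os.path.exists(QUEUE_PATH):
        return False
    with open(QUEUE_PATH) as f:
        lines = f.readlines()
    if len(lines) < BATCH_SIZE:          # S112: transmission condition not met
        return False
    # S114: transmit; send() is assumed to raise on a reception error, in
    # which case the queue is kept so the output information can be resent.
    send([json.loads(line) for line in lines])
    os.remove(QUEUE_PATH)                # S116: delete after confirmed receipt
    return True
```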
 As another example, the acquisition unit 12 may further acquire information indicating the generation time of the sound information. For example, the acquisition unit 12 can check the time managed inside the information processing device 10 at the timing the sound information is acquired and treat that time as the "sound information generation time". Alternatively, the sound collection device 30 may obtain the time managed inside itself at the timing it generates or transmits sound information and transmit it linked to the sound information as the "sound information generation time". The output unit 14 may then further link the generation time of the sound information to the output information and output it to the server device 20 or the storage device 108. In this way, a collection of sound information can be classified based on generation time, enabling more detailed analysis.
 As another example, the acquisition unit 12 may further acquire information indicating the operation of the vehicle when the sound information was generated (operation information) via the CAN communication network. An example of such vehicle operation information is the moving speed of the vehicle. The output unit 14 may then further link the vehicle operation information to the output information and output it to the server device 20 or the storage device 108. In this way, information that can be analyzed in more detail can be generated. For example, when the sound information contains a wind sound, the vehicle speed information can be used to distinguish whether it is wind noise caused by the vehicle's own movement, and the strength of the wind at that location can be estimated.
 As another example, the acquisition unit 12 may further acquire information indicating the weather when the sound information was generated (weather information). For example, the acquisition unit 12 can access a Web server that distributes weather information via the network interface 112 and acquire the weather information corresponding to the position indicated by the position information of S104. The acquisition unit 12 may also estimate the weather at the vehicle's current position using sensing data obtained from various sensors mounted on the vehicle (a raindrop sensor, an illuminance sensor, an image sensor, and the like), or using vehicle control signals obtainable via the CAN communication network; for example, when a control signal operating the wipers is acquired, the acquisition unit 12 can generate information indicating that the weather is rainy. The output unit 14 may then further link the weather information to the output information and output it to the server device 20 or the storage device 108. In this way, a collection of sound information can be classified based on weather information, enabling more detailed analysis.
 As another example, the acquisition unit 12 may further acquire state information indicating the state of the driver when the sound information was generated, that is, how the driver reacted to the external sound or to its source. For example, the acquisition unit 12 acquires an image of the driver from an in-vehicle camera mounted on the vehicle and generates the state information by analyzing the driver's state (facial expression, behavior, and so on) from the acquired image. State information can also be generated by analyzing the driver's state (a change in heart rate, for example) from a biosensor worn by or in contact with the driver. The acquisition unit 12 may acquire the driver's image and the biosensor's output signal over, for example, several seconds to several tens of seconds immediately before, immediately after, or both before and after the timing at which the sound information was generated. Past driver images and biosensor output signals are stored in, for example, the storage device 108, so the acquisition unit 12 can read the image and output signal from immediately before that timing out of the storage device 108 or the like. The output unit 14 may output the state information generated in this way to the server device 20 or the storage device 108, further associated with the output information.
 For example, suppose that at a crossroads a bicycle jumps out of the driver's blind spot and brakes suddenly. The sound collection device 30 collects the sound of the bicycle's sudden braking, and the acquisition unit 12 acquires the sound information on the sudden braking generated by the sound collection device 30. Meanwhile, the in-vehicle camera captures the driver's face image and the biosensor acquires the driver's heart rate. The face image at this moment shows a startled expression, and the heart rate increases sharply. The acquisition unit 12 acquires this image and sensing result from the in-vehicle camera and the biosensor, and generates state information indicating that the driver was surprised.
 As another case, suppose that a bicycle running alongside the vehicle brakes to wait at a signal before an intersection. The sound collection device 30 collects the sound of the bicycle's braking, and the acquisition unit 12 acquires the sound information on the bicycle's braking generated by the sound collection device 30. Again, the in-vehicle camera captures the driver's face image and the biosensor acquires the driver's heart rate. This time the facial expression changes little before and after the bicycle brakes, and the heart rate also changes little. The acquisition unit 12 acquires this image and sensing result from the in-vehicle camera and the biosensor, and generates state information indicating that the driver's state changed little even though the brake sound was heard.
 Although both the in-vehicle camera and the biosensor are used here, only one of them may be used, and the in-vehicle camera may capture the driver's behavior instead of the driver's facial expression. Furthermore, sensors such as an electroencephalogram sensor, a vibration sensor, or an in-vehicle microphone may be used in place of the in-vehicle camera and the biosensor to acquire the driver's brain waves, behavior, or utterances. Two or more of these sensors may also be combined as appropriate.
 Because the driver's state is output to the server device 20 or the storage device 108 in association with the sound information in this way, more detailed analysis becomes possible.
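 For illustration, the following is a minimal Python sketch of deriving state information from heart-rate samples recorded around the time the sound information was generated. The window lengths and the 20-percent-increase criterion for "surprised" are illustrative assumptions, not values taken from this disclosure.

```python
# Minimal sketch of deriving driver state information from heart-rate
# samples around the timestamp of a captured sound. The windows and the
# 20% jump criterion are illustrative assumptions.

def driver_state(samples, t_sound, before=10.0, after=10.0):
    """samples: list of (timestamp_sec, bpm). Returns a state label."""
    pre  = [bpm for t, bpm in samples if t_sound - before <= t < t_sound]
    post = [bpm for t, bpm in samples if t_sound <= t <= t_sound + after]
    if not pre or not post:
        return "unknown"
    base, peak = sum(pre) / len(pre), max(post)
    return "surprised" if peak > base * 1.2 else "calm"

samples = [(0, 70), (5, 72), (10, 71), (12, 95), (15, 98)]
print(driver_state(samples, t_sound=10.0))  # -> "surprised"
```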
 [Operation Example of Server Device 20]
 An example of the operation of the server device 20 will be described with reference to FIG. 9. FIG. 9 is a flowchart illustrating the flow of processing executed by the server device 20 of the first embodiment.
 The receiving unit 22 acquires the output information output from the information processing apparatus 10 (the sound information and the vehicle position information) as input information (S202). The receiving unit 22 stores the acquired input information in, for example, a table separate from the table shown in FIG. 5 (such as a table for storing the input information as is) (S204). This separate table is prepared in advance on, for example, the storage device 208.
 The associating unit 24 groups the sound information stored in this separate table on the basis of the vehicle position information, and totals the number of acquired pieces of sound information for each group (S206). For example, the associating unit 24 can group the sound information by the position indicated by the vehicle position information tied to each piece of sound information, or by the area containing that position.
 The associating unit 24 selects one of the groups (S208) and determines whether the number of pieces of sound information totaled for that group is equal to or greater than a predetermined threshold (criterion) (S210). The predetermined threshold is defined in, for example, the program module of the associating unit 24 as a value corresponding to the reliability to be secured in later analysis. Specifically, the larger the threshold, the more sound information is required before it is associated with the map information. As a result, the amount of information available for analysis increases, which improves the reliability of the analysis results.
 When the number of pieces of sound information totaled for the group is less than the predetermined threshold (S210: NO), the associating unit 24 determines whether any group remains unselected (S214). When all groups have been selected (S214: NO), the server device 20 ends the processing. When an unselected group remains (S214: YES), the associating unit 24 selects that group (S208) and again determines whether its number of pieces of sound information is equal to or greater than the predetermined threshold (S210).
 When the number of pieces of sound information totaled for the group is equal to or greater than the predetermined threshold (S210: YES), the associating unit 24 associates the sound information belonging to that group with the map information using the vehicle position information (S212). For example, the associating unit 24 can identify the position on the map with which each piece of sound information should be associated from the vehicle position information tied to it. Specifically, the associating unit 24 can refer to the "position on the map (map position information)" column of a table such as the one shown in FIG. 5 to identify the row (record) with which each group's sound information should be associated, and can then associate each piece of sound information with the map information by storing it in the "sound information" column of the identified row (record).
 The associating unit 24 then determines whether any group remains unselected (S214). When all groups have been selected (S214: NO), the server device 20 ends the processing. When an unselected group remains (S214: YES), the associating unit 24 selects that group (S208) and again determines whether its number of pieces of sound information is equal to or greater than the predetermined threshold (S210).
 In the above processing, when the input information received by the receiving unit 22 further contains information indicating the time at which the sound information was generated, the vehicle operation information at that time, or the weather information at that time, the associating unit 24 may store these pieces of information together with the sound information. In this way, more detailed information can be obtained from the collection of sound information associated with the map information.
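 For illustration, the following is a minimal Python sketch of the server-side flow described above (S206 to S212): received records are grouped by position, and a group is associated with the map only when it contains enough samples. The grid-cell grouping, the cell size, and the threshold value are assumptions introduced here.

```python
# Minimal sketch of the server-side flow (S206-S214): group received
# sound records by position, and associate a group with the map only
# when it contains at least THRESHOLD items. The grid and threshold
# are illustrative assumptions.

from collections import defaultdict

THRESHOLD = 5      # assumed reliability criterion
CELL = 0.001       # assumed grid size in degrees (roughly 100 m)

def cell_of(lat, lon):
    return (round(lat / CELL), round(lon / CELL))

def associate(records, map_table):
    """records: list of (lat, lon, sound). map_table: dict cell -> list."""
    groups = defaultdict(list)
    for lat, lon, sound in records:               # S206: group and count
        groups[cell_of(lat, lon)].append(sound)
    for cell, sounds in groups.items():           # S208: pick each group
        if len(sounds) >= THRESHOLD:              # S210: enough samples?
            map_table.setdefault(cell, []).extend(sounds)   # S212
    return map_table
```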
 [Second Embodiment]
 [Functional Configuration]
 FIG. 10 is a block diagram conceptually showing the functional configuration of the information processing apparatus 10 of the second embodiment. As shown in FIG. 10, the information processing apparatus 10 of this embodiment further includes a sound analysis unit 16.
 The sound analysis unit 16 analyzes the sound information acquired by the acquisition unit 12 and generates sound analysis information. For example, the sound analysis unit 16 can generate sound analysis information indicating the volume of the sound information on the basis of the sound pressure level of the sound information (an electric signal). The sound analysis unit 16 may also decompose the sound information into its frequency components using frequency analysis software or the like, and generate sound analysis information indicating the volume per frequency on the basis of the sound pressure level of each frequency component. The sound analysis unit 16 may also generate sound analysis information indicating the sound source using known sound source separation and sound source identification algorithms; with these algorithms it can extract, for example, human voices, vehicle running sounds (e.g., engine sound, road noise), emergency vehicle sirens, sudden braking sounds, and bicycle chain sounds from the sound information. In addition, using a dictionary database built from the analysis results of voice samples per age and sex (formant distributions and the like), the sound analysis unit 16 can generate sound analysis information indicating the age and sex of a person when the sound source is a person.
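 For illustration, the following is a minimal Python sketch of the first two kinds of analysis described above: an overall level from the RMS sound pressure, and per-band levels from an FFT. The 20 uPa reference pressure is the usual acoustics convention; the band edges and the spectrum scaling are simplified assumptions of this sketch.

```python
# Minimal sketch of volume-level analysis: overall dB level from RMS
# sound pressure, and per-band levels from a one-sided FFT (constant
# scaling factors are omitted for brevity).

import numpy as np

P_REF = 20e-6                               # 20 uPa reference pressure

def overall_level_db(pressure: np.ndarray) -> float:
    rms = np.sqrt(np.mean(pressure ** 2))
    return 20 * np.log10(rms / P_REF)

def band_levels_db(pressure: np.ndarray, fs: int, bands):
    """bands: list of (f_lo, f_hi) in Hz -> list of levels in dB."""
    spec = np.abs(np.fft.rfft(pressure)) / len(pressure)
    freqs = np.fft.rfftfreq(len(pressure), 1 / fs)
    out = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        rms = np.sqrt(np.sum(spec[mask] ** 2))  # band RMS (approximate)
        out.append(20 * np.log10(max(rms, 1e-12) / P_REF))
    return out
```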
 The output unit 14 of this embodiment generates output information that ties the sound analysis information generated from the sound information by the sound analysis unit 16 to the vehicle position information acquired by the acquisition unit 12, and outputs it to a predetermined output destination (e.g., the storage device 108 of the information processing apparatus 10 or the server device 20).
 The receiving unit 22 of this embodiment receives, as input information, the output information containing the sound analysis information output from the output unit 14 of the information processing apparatus 10. As described above, the sound analysis information includes volume-level analysis results (the overall volume level and the volume level per frequency), detection results for human voices (adult/child), bicycle chain sounds, emergency vehicle sirens, sudden braking sounds, collision sounds, and the like.
 The associating unit 24 of this embodiment stores the sound analysis information contained in the input information in association with the map information, using the vehicle position information contained in the same input information. The associating unit 24 can compare the vehicle position information in the input information with position information on the map, and associate the sound analysis information with the map information on the basis of the comparison result. As one example, the associating unit 24 can associate the sound analysis information with the map information using a table such as the one shown in FIG. 11.
 FIG. 11 shows an example of a table associating sound analysis information with map information. The table illustrated in FIG. 11 stores sound analysis information per piece of map position information, that is, per value stored in the "position on the map" column. The map position information may be, for example, information indicating a single position, or information indicating an area defined by three or more pieces of position information. The map position information may also be a mere identifier; in that case, the information indicating the specific position or area on the map is prepared separately, tied to that identifier.
 In the example of FIG. 11, the associating unit 24 compares the position information contained in the input information received by the receiving unit 22 with the map position information stored in the "position on the map" column, and identifies the row (record) in which the sound analysis information contained in that input information should be stored. Specifically, the associating unit 24 identifies, as the row (record) in which the sound analysis information should be stored, the row whose map position information matches the position indicated by the position information in the input information, or whose map position information covers that position. The associating unit 24 then stores the sound analysis information contained in the received input information in the "sound analysis information" column of the identified row (record). By storing the sound analysis information in correspondence with information indicating a position on the map in this way, the associating unit 24 can associate it with the map information. Furthermore, when the associating unit 24 has also acquired from the information processing apparatus 10 the sound information on which the sound analysis information is based, it may store that sound information tied to the record as well; in that case, a "sound information" column is added to the table of FIG. 11.
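 For illustration, the following is a minimal Python sketch of the record lookup described above, in which an incoming position is matched to the row whose point coincides with it or whose area contains it. The tolerance value and the rectangular area model are assumptions introduced here.

```python
# Minimal sketch of the record lookup: each table row carries either a
# single point or an area; an incoming position is matched to the row
# whose point coincides with it (within a tolerance) or whose area
# contains it. Tolerance and rectangular areas are assumptions.

def find_record(rows, lat, lon, tol=1e-4):
    """rows: dicts with {'point': (lat, lon)} or
    {'area': (lat_min, lon_min, lat_max, lon_max)} plus 'analysis': []."""
    for row in rows:
        if "point" in row:
            plat, plon = row["point"]
            if abs(plat - lat) <= tol and abs(plon - lon) <= tol:
                return row
        elif "area" in row:
            a = row["area"]
            if a[0] <= lat <= a[2] and a[1] <= lon <= a[3]:
                return row
    return None

rows = [{"point": (35.6580, 139.7016), "analysis": []}]
rec = find_record(rows, 35.65805, 139.70158)
if rec:
    rec["analysis"].append({"source": "bicycle brake", "level_db": 78})
```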
 [Hardware Configuration]
 As in the first embodiment, the information processing apparatus 10 of this embodiment has a hardware configuration such as the one shown in FIG. 6. The storage device 108 of this embodiment further stores a program module for realizing the sound analysis unit 16. The processor 104 realizes the function of the sound analysis unit 16 by reading this program module into the memory 106 and executing it.
 Thus, in this embodiment, the analysis of the sound information generated by the sound collection device 30 is executed by the information processing apparatus 10 of each vehicle, and the resulting sound analysis information is included in the output information. According to this embodiment, the load of the sound information analysis processing on the server device 20 can therefore be reduced.
 [Third Embodiment]
 [Functional Configuration]
 FIG. 12 is a block diagram conceptually showing the functional configuration of the analysis server device 40 of the third embodiment. As shown in FIG. 12, the analysis server device 40 of this embodiment includes an acquisition unit 42 and an additional information generation unit 44. The analysis server device 40 may be the server device 20 of the embodiments described above, or may be another device provided separately from the server device 20.
 The acquisition unit 42 acquires sound information generated using a sound collection device mounted on a vehicle, together with position information tied to that sound information. The position information tied to the sound information is, for example, the "position information of the vehicle when the sound information was generated" or the "map position information" described in the embodiments above.
 Specifically, the acquisition unit 42 can acquire the sound information and the vehicle position information from the information processing apparatus 10 described in the embodiments above. When the information processing apparatus 10 generates and transmits the analysis result of the sound information (sound analysis information), the acquisition unit 42 can acquire the sound analysis information and the vehicle position information, in which case the additional information generation unit 44 need not analyze the sound information itself. Further, when the server device 20 or the like described in the embodiments above accumulates the sound information and the vehicle position information acquired from the information processing apparatus 10 in a predetermined table, the acquisition unit 42 can acquire them by referring to that table; when the sound analysis information has already been generated and stored in the table, the acquisition unit 42 can acquire the sound analysis information and the vehicle position information, and again the additional information generation unit 44 need not analyze the sound information. The additional information generation unit 44 may also refer to a table such as those shown in FIG. 5 or FIG. 11 to acquire the sound information or sound analysis information together with the map position information tied to it; when referring to the table of FIG. 11, the additional information generation unit 44 need not analyze the sound information.
 The additional information generation unit 44 generates additional information using the analysis results of the sound information acquired by the acquisition unit 42. Using sound analysis information containing at least one of the analysis result of the volume of the sound information, the analysis result of its frequency band, and the analysis result of its sound source, all generated by analyzing the sound information, the additional information generation unit 44 can generate additional information, for example, as follows. The description below takes as an example the case where the additional information generation unit 44 itself analyzes the sound information; the additional information generation unit 44 may instead be configured to acquire the various analysis results of the sound information from another processing unit.
 First, the additional information generation unit 44 analyzes the sound information acquired by the acquisition unit 42 in the same manner as the sound analysis unit 16 of the second embodiment, and generates information indicating the analysis results (sound analysis information). For example, the additional information generation unit 44 generates sound analysis information containing at least one of the analysis result of the volume of the sound information (the overall volume or the volume per frequency) and the analysis result of its sound source (for example, a person, a bicycle, an emergency vehicle, sudden braking, and so on).
 The additional information generation unit 44 then takes statistics of the analysis results indicated by the sound analysis information and generates additional information based on those statistics. For example, based on the mean, median, or mode of the volume level computable from the sound analysis information, the additional information generation unit 44 can generate additional information indicating features such as "the volume level is high/low"; it can judge high or low by comparing these values against a predetermined threshold defined in, for example, its program module. Likewise, based on the detection frequency of each sound source contained in the sound analysis information, it can generate additional information indicating features such as "many/few pedestrians or bicycles", "emergency vehicles pass frequently/infrequently", or "vehicles move fast/slowly". The additional information generation unit 44 can judge whether the detection frequency of each sound source is high or low by comparing the detection count per sound source against a threshold, again defined in its program module; the threshold may be defined as a different value per sound source or as a value common to all sound sources. As a further example, based on the mean, median, or mode of the speed inferable from the frequency distribution of road noise and engine sound, the additional information generation unit 44 can generate additional information indicating features such as "the average moving speed of vehicles is fast/slow".
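 For illustration, the following is a minimal Python sketch of turning per-group statistics into the feature labels described above. The threshold values are illustrative assumptions defined in the sketch itself.

```python
# Minimal sketch of turning per-group analysis results into feature
# labels. All thresholds are illustrative assumptions.

from statistics import mean

LOUD_DB = 70     # assumed "loud" cutoff
FREQUENT = 10    # assumed "frequent source" cutoff per group

def make_additional_info(levels_db, source_counts):
    """levels_db: list of volume levels; source_counts: dict source->count."""
    info = {"volume": "loud" if mean(levels_db) >= LOUD_DB else "quiet"}
    for source, n in source_counts.items():
        info[source] = "frequent" if n >= FREQUENT else "rare"
    return info

print(make_additional_info([65, 74, 71], {"pedestrian": 14, "siren": 2}))
# -> {'volume': 'loud', 'pedestrian': 'frequent', 'siren': 'rare'}
```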
 The more pieces of sound analysis information are used to generate the additional information (that is, the more samples are available when extracting the features), the more reliable the additional information becomes. The additional information generation unit 44 may therefore (1) total the number of pieces of sound information per group classified by the position information tied to the sound information, (2) select the groups whose count is equal to or greater than a reference threshold, and (3) generate the additional information using only the analysis results of the sound information belonging to the selected groups.
 The additional information generation unit 44 also associates the additional information with the position on the map information identified using the position information acquired by the acquisition unit 42. As one example, the additional information generation unit 44 can associate the additional information with the map information using a table such as the one shown in FIG. 13.
 FIG. 13 shows an example of a table associating additional information with map information. The table illustrated in FIG. 13 stores additional information per piece of map position information. The map position information may be, for example, information indicating a single position, or information indicating an area defined by three or more pieces of position information. It may also be a mere identifier; in that case, the information indicating the specific position or area on the map is prepared separately, tied to that identifier.
 In the example of FIG. 13, the additional information generation unit 44 compares the position information acquired by the acquisition unit 42 with the map position information stored in the "position on the map" column, and identifies the row (record) in which the generated additional information should be stored. Specifically, the additional information generation unit 44 identifies, as the row (record) in which the additional information should be stored, the row whose map position information matches the position indicated by the acquired position information, or whose map position information covers that position. The additional information generation unit 44 then stores the additional information in the "additional information" column of the identified row (record). By storing the additional information in correspondence with information indicating a position on the map in this way, the additional information generation unit 44 can associate it with the map information.
 In the configuration described above, the sound information generated by the sound collection devices 30 mounted on the respective vehicles is first acquired, and the additional information to be added to the map information is generated using the analysis results of that sound information. This additional information indicates features such as "the volume level is high/low", "many/few pedestrians or bicycles", "emergency vehicles pass frequently/infrequently", and "vehicles move fast/slowly". A "position on the map" is then identified on the basis of the position information tied to the sound information, and the additional information is associated with that position. As a result, map information can be created that contains features specific to each position on the map and that is useful in various respects.
 [Hardware Configuration]
 Each functional component of the analysis server device 40 may be realized by hardware that implements it (e.g., a hard-wired electronic circuit) or by a combination of hardware and software (e.g., a combination of an electronic circuit and a program controlling it). The case where each functional component of the analysis server device 40 is realized by a combination of hardware and software is described further below with reference to FIG. 14. FIG. 14 is a diagram illustrating the hardware configuration of the analysis server device 40.
 The analysis server device 40 has a bus 402, a processor 404, a memory 406, a storage device 408, an input/output interface 410, and a network interface 412. The bus 402 is a data transmission path through which the processor 404, the memory 406, the storage device 408, the input/output interface 410, and the network interface 412 exchange data with one another; however, the method of interconnecting the processor 404 and the other components is not limited to a bus connection. The processor 404 is an arithmetic processing unit realized using a microprocessor or the like. The memory 406 is realized using a RAM (Random Access Memory) or the like, and the storage device 408 is realized using a ROM (Read Only Memory), a flash memory, or the like.
 The input/output interface 410 is an interface for connecting the analysis server device 40 to peripheral devices. For example, an input device such as a keyboard or mouse, a display device such as an LCD (Liquid Crystal Display), or a touch panel integrating both is connected to the input/output interface 410. The input device and the display device may instead be connected over a network via the network interface 412.
 The network interface 412 is an interface for connecting the analysis server device 40 to a communication network. The analysis server device 40 has the network interface 412 for connecting to a WAN (Wide Area Network).
 The storage device 408 stores program modules for realizing the functional components of the analysis server device 40. The processor 404 realizes the functions of those components by reading these program modules into the memory 406 and executing them.
 [Operation Example]
 FIG. 15 is a flowchart illustrating the flow of processing executed by the analysis server device 40 of the third embodiment. The following flowchart takes as an example the flow in which the analysis server device 40 generates the additional information by referring to a table such as the one shown in FIG. 5.
 The acquisition unit 42 acquires the sound information and the position information tied to it (map position information) (S302). Specifically, the acquisition unit 42 refers to a table such as the one shown in FIG. 5 and reads the sound information stored in each record together with the map position information tied to it.
 The additional information generation unit 44 groups the sound information on the basis of the position information acquired in S302 and totals the number of pieces of sound information in each group (S304). When the table of FIG. 5 is referred to, the sound information is already grouped by map position information, so the additional information generation unit 44 totals the number of pieces of sound information per map-position group.
 The additional information generation unit 44 selects one group (S306) and determines whether the number of pieces of sound information totaled for that group is equal to or greater than a predetermined threshold (criterion) (S308). The predetermined threshold is defined in, for example, the program module of the additional information generation unit 44 as a value corresponding to the reliability required of the additional information generated from the analysis results of the sound information. Specifically, the larger the threshold, the more sound information is used when generating the additional information; generating it from the analysis results of many pieces of sound information improves its reliability.
 When the number of pieces of sound information totaled for the group is less than the predetermined threshold (S308: NO), the additional information generation unit 44 determines whether any group remains unselected (S316). When all groups have been selected (S316: NO), the analysis server device 40 ends the processing. When an unselected group remains (S316: YES), the additional information generation unit 44 selects that group (S306) and again determines whether its number of pieces of sound information is equal to or greater than the predetermined threshold (S308).
 When the number of pieces of sound information totaled for the group is equal to or greater than the predetermined threshold (S308: YES), the additional information generation unit 44 analyzes each piece of sound information belonging to that group and generates sound analysis information (for example, information indicating detection results for people, bicycles, emergency vehicles, sudden braking sounds, and so on) (S310). The additional information generation unit 44 further generates additional information (for example, "many/few pedestrians or bicycles", "emergency vehicles pass frequently/infrequently", "vehicles move fast/slowly") based on the statistics of the analysis results indicated by the sound analysis information (S312). The additional information generation unit 44 then associates the additional information with the map information using the position information tied to the sound information belonging to the selected group (S314). For example, the additional information generation unit 44 can associate the additional information with the map information by tying the map position information read in S302 to the additional information generated in S312 and generating a table such as the one shown in FIG. 13.
 The additional information generation unit 44 then determines whether any group remains unselected (S316). When all groups have been selected (S316: NO), the analysis server device 40 ends the processing. When an unselected group remains (S316: YES), the additional information generation unit 44 selects that group (S306) and again determines whether its number of pieces of sound information is equal to or greater than the predetermined threshold (S308).
 In the above processing, when the sound information acquired by the acquisition unit 42 is further tied to time information indicating when the sound information was generated, to the vehicle operation information at that time, or to the weather information at that time, the additional information generation unit 44 can use these pieces of information to generate more detailed additional information. For example, it can use the time information or the weather information to generate additional information indicating features per time zone or per weather condition, and it can use the operation information to include in the additional information the wind strength inferable from it.
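 For illustration, the following is a minimal Python sketch of the time- and weather-stratified statistics described above: the same aggregation, run per (time band, weather) bucket. The three time bands and the mean-level aggregate are illustrative assumptions.

```python
# Minimal sketch: stratify analysis results by time band and weather,
# then aggregate per bucket. The band split is an assumption.

from collections import defaultdict

def time_band(hour):
    return "morning" if 5 <= hour < 12 else "day" if hour < 18 else "night"

def stratify(records):
    """records: list of (hour, weather, level_db) -> bucket -> mean level."""
    buckets = defaultdict(list)
    for hour, weather, level in records:
        buckets[(time_band(hour), weather)].append(level)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

print(stratify([(8, "rain", 62), (9, "rain", 66), (22, "clear", 55)]))
# -> {('morning', 'rain'): 64.0, ('night', 'clear'): 55.0}
```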
 As another example, when the sound information acquired by the acquisition unit 42 is further associated with state information indicating the state of the driver when the sound information was generated, that state information may be associated with the map information as additional information.
 For example, suppose the output unit 14 has output sound information indicating a bicycle's sudden braking together with state information indicating that the driver was surprised. The additional information generation unit then generates additional information indicating that a driver who hears sudden braking at this position is startled, and adds it to the map information. Suppose instead the output unit 14 has output sound information indicating a bicycle's braking together with state information indicating that the driver's state did not change (a calm state). The additional information generation unit then generates additional information indicating that the driver's state changes little even when a bicycle's brake sound is heard at this position, and adds it to the map information.
 When another vehicle uses a map generated in this way for its travel, how the driver will react to the external sounds arising at the destination can be predicted in advance, so vehicle control based on such predictions becomes possible.
 [Fourth Embodiment]
 The analysis server device 40 of this embodiment is the same as that of the third embodiment except for the following points.
 [Functional Configuration]
 FIG. 16 is a block diagram conceptually showing the functional configuration of the analysis server device 40 of the fourth embodiment. As shown in FIG. 16, the analysis server device 40 of this embodiment further includes a display output unit 46 in addition to the configuration of the third embodiment.
 The display output unit 46 acquires range information specifying the display range of the map information and reads the additional information associated with the range indicated by that range information. The display output unit 46 can acquire the range information from, for example, an input device connected to the analysis server device 40, or from another device connected to the analysis server device 40 via the network interface 412 (for example, a navigation device mounted on a vehicle or a user PC (Personal Computer)). Based on the read additional information, the display output unit 46 generates and outputs drawing data to be displayed superimposed on the map information. The output destination of the drawing data is a display device connected to the analysis server device 40, or another device connected via the network interface 412. The drawing data visualizes the additional information; specifically, it is data for drawing a distribution map of volume levels, of sound sources, of the average moving speed of vehicles, and so on.
 The display output unit 46 generates drawing data for displaying, for example, the information illustrated in FIGS. 17 to 22. FIGS. 17 to 22 are diagrams showing examples of the drawing data generated by the display output unit 46.
 FIG. 17 shows an example in which the display output unit 46 generates and outputs drawing data indicating the distribution of volume levels in the specified display range. The display output unit 46 refers to a table such as the one shown in FIG. 13 and reads the additional information stored for each position on the map. The display output unit 46 then identifies the volume level for each position from the additional information for that position (e.g., "the volume level is high/low"), joins the per-position volume levels together, and generates drawing data indicating the distribution of volume levels. The drawing data generated by the display output unit 46 produces a screen such as the one shown in FIG. 17, from which a viewer can easily grasp the volume level (that is, the noise level) at each location.
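 For illustration, the following is a minimal Python sketch of producing drawing data for such a volume-level overlay: each map cell that has additional information becomes one colored rectangle handed to a renderer. The grid layout, color ramp, and output format are assumptions introduced here.

```python
# Minimal sketch of producing drawing data for a volume-level overlay:
# each map cell with additional information becomes one colored
# rectangle. Grid, colors, and output format are assumptions.

CELL = 0.001  # must match the grid used when the data was associated

def color_for(label):
    return {"loud": "#d73027", "quiet": "#1a9850"}.get(label, "#cccccc")

def drawing_data(map_table):
    """map_table: dict (i, j) cell -> {'volume': 'loud'|'quiet', ...}."""
    shapes = []
    for (i, j), info in map_table.items():
        lat, lon = i * CELL, j * CELL   # representative corner of the cell
        shapes.append({
            "rect": (lat, lon, lat + CELL, lon + CELL),
            "fill": color_for(info.get("volume")),
        })
    return shapes   # handed to the map renderer as an overlay layer
```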
 When the additional information contains information indicating the volume level per frequency band, the display output unit 46 may further acquire information specifying a frequency band via an input field such as the one denoted by reference numeral 170 in FIG. 17, and generate and output drawing data indicating the distribution of volume levels in the specified frequency band. This makes it easy to grasp which frequency bands of sound occur at each location and at what volume. The volume per frequency band is useful information for understanding the characteristics of a location. For example, a location where the low-frequency volume is high can be presumed to be one where low-frequency or infrasonic sound, which can cause discomfort or pose health risks, may be emitted with some frequency. Likewise, a location where the high-frequency volume is high can be presumed to be one where events producing high-frequency sound, such as metallic construction noise or bicycle brake squeal, are likely to occur frequently.
 Beyond the example of FIG. 17, when the additional information contains information indicating the analysis results of the frequency bands of the sound information, the display output unit 46 can also output drawing data indicating the distribution of the frequency bands of the sound information within the range indicated by the range information. For example, the additional information generation unit 44 can generate additional information indicating statistics of the frequency (for example, the mean or median) from the analysis results of the frequencies of the sound information collected at each point, and the display output unit 46 can generate drawing data indicating the frequency-band distribution from the additional information generated in this way. In this case, an item such as "frequency distribution" is added to the input field for selecting the display information in FIG. 17. The frequency-band distribution displayed in this way is likewise useful information from which the characteristics of a location can be inferred.
 FIG. 18 shows an example in which the display output unit 46 generates and outputs drawing data indicating the distribution of sound source detection frequencies in the specified display range. The display output unit 46 refers to a table such as the one shown in FIG. 13 and reads the additional information stored for each position on the map. Based on the additional information for each position (e.g., "many/few pedestrians", "many/few bicycles", "emergency vehicles pass frequently/infrequently", "sudden braking occurs frequently/infrequently"), the display output unit 46 determines for each position whether a frequently detected sound source exists there, and generates drawing data that displays, at each such position, an icon corresponding to that sound source. The drawing data generated by the display output unit 46 produces a screen such as the one shown in FIG. 18, from which a viewer can easily identify locations requiring caution, such as locations with many pedestrians or bicycles, locations that emergency vehicles pass frequently, and locations where sudden braking occurs frequently.
 The display output unit 46 may further acquire information specifying the type of sound source via an input field such as the one denoted by reference numeral 180 in FIG. 18, and generate and output drawing data indicating the distribution of detection frequencies for the specified type of sound source. The input field is not limited to the example of FIG. 18 and may be configured so that multiple types can be specified, for example with checkboxes. Narrowing down the information displayed on the screen in this way makes it easier for the viewer to find the desired information.
 In the examples of FIGS. 19 to 21, in addition to the drawing data indicating the distribution of sound source detection frequencies in the specified display range of FIG. 18, the display output unit 46 generates and outputs drawing data indicating whether each sound source affects the driver. The display output unit 46 reads the additional information indicating the driver's state, stored for example per position on the map, and generates drawing data corresponding to the driver's state at each position; in other words, drawing data indicating whether the sound source at that position is one that affects the driver. The drawing data generated by the display output unit 46 produces screens such as those shown in FIGS. 19 to 21. Specifically, FIG. 19 shows icons 191 and 192 as examples of drawing data indicating whether a sound source affects the driver: icon 191 indicates that the sound source does not affect the driver or affects the driver only slightly, while icon 192 indicates that the sound source may affect the driver. The "drawing data indicating whether the sound source affects the driver" displayed by the display output unit 46 is not limited to the example of FIG. 19. For example, as shown in FIG. 20, the display output unit 46 may display icons 201 and 202 depicting changes in a person's facial expression; as another example, as shown in FIG. 21, it may display icons 211 and 212 depicting changes in heart rate. In the examples of FIGS. 19 to 21, the degree of influence on the driver may also be set in three or more levels, with an icon displayed according to the level; in that case, the display output unit 46 can identify the icon to be displayed from the level indicated by the additional information for each position on the map. Screens such as those illustrated in FIGS. 19 to 21 allow the viewer to easily grasp whether a sound source appearing at a position the vehicle is about to travel through will affect the driver, and can thus contribute to driving support for the vehicle.
 FIG. 22 shows an example in which the display output unit 46 generates and outputs drawing data indicating the distribution of the average speed of vehicles in the designated display range. The display output unit 46 reads the additional information for each position on the map, for example by referring to a table such as the one shown in FIG. 13. The display output unit 46 then identifies the average vehicle speed for each position on the basis of the additional information stored for each position on the map (e.g., whether the average vehicle speed is high or low). The display output unit 46 then connects the average vehicle speeds of the individual positions to generate drawing data indicating the distribution of the average vehicle speed. A screen such as the one shown in FIG. 22 is then displayed on the basis of the drawing data generated by the display output unit 46. Such a screen allows a person viewing the screen to easily grasp the average vehicle speed at each location, and also to determine, for each location, whether congestion is likely to occur.
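 The following sketch illustrates one way the per-position average could be computed before drawing. The table layout (position mapped to a list of observed speeds) is an assumption for illustration; the publication only states that additional information per position is read.

```python
def average_speed_distribution(speed_table):
    """Average the observed vehicle speeds stored for each map position,
    producing the per-position values the drawing data is built from."""
    return {
        position: sum(speeds) / len(speeds)
        for position, speeds in speed_table.items()
        if speeds  # skip positions without observations
    }

speed_table = {
    (35.68, 139.76): [42.0, 38.5, 45.0],  # km/h samples: traffic flows here
    (35.69, 139.70): [12.0, 9.5],         # low average: congestion-prone spot
}
for position, avg in sorted(average_speed_distribution(speed_table).items()):
    print(position, round(avg, 1))
```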
 [Hardware configuration]
 Similar to the third embodiment, the analysis server device 40 of this embodiment has a hardware configuration such as the one shown in FIG. 14. The storage device 408 of this embodiment further stores a program module for realizing the display output unit 46. The processor 404 reads this program module of the display output unit 46 into the memory 406 and executes it, thereby realizing the function of the display output unit 46.
 Although embodiments and examples have been described above with reference to the drawings, they are illustrations of the present invention, and various configurations other than the above can also be adopted.
 This application claims priority based on Japanese Patent Application No. 2016-233488 filed on November 30, 2016, the entire disclosure of which is incorporated herein.

Claims (10)

  1.  An information processing apparatus comprising:
     an acquisition unit configured to acquire sound information generated on the basis of an external sound that is collected by a sound collection device mounted on a moving body and that occurs outside the moving body, and position information of the moving body at the time when the sound information is generated; and
     an output unit configured to generate output information in which the sound information and the position information are associated with each other, and to output the output information to a predetermined output destination.
  2.  The information processing apparatus according to claim 1, further comprising a sound analysis unit configured to analyze the sound information and generate sound analysis information,
     wherein the output unit generates output information in which the sound analysis information generated from the sound information and the position information of the moving body are linked, and outputs the output information to the predetermined output destination.
  3.  The information processing apparatus according to claim 2, wherein the sound analysis unit generates the sound analysis information indicating a volume of the sound information.
  4.  The information processing apparatus according to claim 3, wherein the sound analysis unit generates the sound analysis information indicating a volume of the sound information for each frequency.
  5.  The information processing apparatus according to any one of claims 1 to 4,
     wherein the acquisition unit further acquires time information indicating a time at which the sound information was generated, and
     the output unit further links the time information to the output information and outputs the result to the predetermined output destination.
  6.  The information processing apparatus according to any one of claims 1 to 4,
     wherein the acquisition unit further acquires operation information indicating an operation of the moving body, and
     the output unit further links the operation information to the output information and outputs the result to the predetermined output destination.
  7.  The information processing apparatus according to any one of claims 1 to 6,
     wherein the acquisition unit further acquires weather information indicating the weather at the time when the sound information was generated, and
     the output unit further links the weather information to the output information and outputs the result to the predetermined output destination.
  8.  The information processing apparatus according to any one of claims 1 to 7,
     wherein the acquisition unit acquires state information indicating a state of a passenger of the moving body at the time when the sound information was generated, and
     the output unit outputs the state information to the predetermined output destination in further association with the output information.
  9.  An information collection method comprising, by a computer:
     acquiring sound information generated on the basis of an external sound that is collected by a sound collection device mounted on a moving body and that occurs outside the moving body, and position information of the moving body at the time when the sound information is generated; and
     generating output information in which the sound information and the position information are associated with each other, and outputting the output information to a predetermined output destination.
  10.  A program for causing a computer to function as:
     means for acquiring sound information generated on the basis of an external sound that is collected by a sound collection device mounted on a moving body and that occurs outside the moving body, and position information of the moving body at the time when the sound information is generated; and
     means for generating output information in which the sound information and the position information are associated with each other, and outputting the output information to a predetermined output destination.
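 As a non-authoritative illustration of the structure recited in claim 1 (and the corresponding method of claim 9), the sketch below pairs sound information with the moving body's position and writes the associated result to an output destination. All names are illustrative stand-ins, not part of the claimed subject matter.

```python
import json
import sys
from dataclasses import dataclass

@dataclass
class Record:
    sound_info: str  # stands in for sound generated outside the moving body
    position: tuple  # moving body's position when the sound info was generated

def acquire(collector):
    """Acquisition unit: obtain (sound information, position) pairs."""
    return [Record(sound, position) for sound, position in collector()]

def output(records, destination):
    """Output unit: associate sound and position, send to the destination."""
    for record in records:
        destination.write(json.dumps(
            {"sound": record.sound_info, "position": record.position}) + "\n")

# A stubbed collector, with stdout standing in for the predetermined output destination.
output(acquire(lambda: [("snd-001", (35.68, 139.76))]), sys.stdout)
```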
PCT/JP2017/043135 2016-11-30 2017-11-30 Information processing device, information collection method, and program WO2018101429A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2016-233488 2016-11-30
JP2016233488 2016-11-30

Publications (1)

Publication Number Publication Date
WO2018101429A1 true WO2018101429A1 (en) 2018-06-07

Family

ID=62241693

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/043135 WO2018101429A1 (en) 2016-11-30 2017-11-30 Information processing device, information collection method, and program

Country Status (1)

Country Link
WO (1) WO2018101429A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009068890A (en) * 2007-09-11 2009-04-02 Pioneer Electronic Corp Device, method and program for recording environmental sound pressure
JP2012073088A (en) * 2010-09-28 2012-04-12 Sony Corp Position information providing device, position information providing method, position information providing system and program
WO2013121464A1 (en) * 2012-02-16 2013-08-22 三菱電機株式会社 Sound-emitting apparatus
US20150338227A1 (en) * 2012-11-22 2015-11-26 Freescale Semiconductor, Inc. Navigation system
JP2015191256A (en) * 2014-03-27 2015-11-02 パイオニア株式会社 Risk degree determination device, risk degree determination method and risk degree determination program

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020046593A (en) * 2018-09-21 2020-03-26 パイオニア株式会社 Data structure, storage medium, storage device
JP2020046594A (en) * 2018-09-21 2020-03-26 パイオニア株式会社 Data structure, storage medium, and storage device
JP2020140379A (en) * 2019-02-27 2020-09-03 トヨタ自動車株式会社 Operation support system
US11409305B2 (en) 2019-02-27 2022-08-09 Toyota Jidosha Kabushiki Kaisha Driving assistance system
JP7120077B2 (en) 2019-02-27 2022-08-17 トヨタ自動車株式会社 driving support system
JP2020144404A (en) * 2019-03-04 2020-09-10 トヨタ自動車株式会社 Driving support system
JP7133155B2 (en) 2019-03-04 2022-09-08 トヨタ自動車株式会社 driving support system

Similar Documents

Publication Publication Date Title
WO2018101429A1 (en) Information processing device, information collection method, and program
US11308785B1 (en) Systems and methods for the mitigation of drowsy or sleepy driving
US20170129497A1 (en) System and method for assessing user attention while driving
KR20190115040A (en) Methods, devices, equipment and storage media for determining driving behavior
US11615476B2 (en) Information processing device and method
WO2015151594A1 (en) Driving assistance system, method, and program
US20130131893A1 (en) Vehicle-use information collection system
WO2017160663A1 (en) Traffic pollution mapper
KR101744963B1 (en) Traffic information and vehicle information providing system using black box video data and method thereof
JP2012048310A (en) Driving support system, on-vehicle device and information distribution device
JP2018136754A (en) Information processing apparatus, and mobility data collecting system
WO2018101430A1 (en) Server device, analysis method, and program
US20210229674A1 (en) Driver profiling and identification
KR102268134B1 (en) Apparatus and method for warning vehicle collision by using mobile data and infra data
US10255803B2 (en) Vehicle image data transmission device
CN113352989A (en) Intelligent driving safety auxiliary method, product, equipment and medium
KR102101975B1 (en) Real-time system and method for warning rear-end collision
JP2018129585A (en) Monitoring system and monitoring method
JP2005305003A (en) Operation state confirming device and operation state confirming system
JP4315073B2 (en) Failure analysis system
JP7347390B2 (en) Driving evaluation device, driving evaluation system, and driving evaluation program
CN112700138A (en) Method, device and system for road traffic risk management
JP6265525B2 (en) Driving related information sharing system and driver terminal
CN112740298B (en) Information providing system, server, method, and computer-readable storage medium
JP5388705B2 (en) Driving diagnosis device and driving diagnosis method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17876827

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17876827

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP