WO2022113261A1 - Information collection system, server, vehicle, method, and computer-readable medium - Google Patents
- Publication number
- WO2022113261A1 (PCT/JP2020/044197)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- recognition model
- information
- server
- specific scene
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07C—TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
- G07C5/00—Registering or indicating the working of vehicles
- G07C5/008—Registering or indicating the working of vehicles communicating information to a remotely located station
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16Y—INFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
- G16Y10/00—Economic sectors
- G16Y10/40—Transportation
Definitions
- This disclosure relates to an information collection system, a server, a vehicle, an information collection method, an information transmission method, and a computer-readable medium.
- Patent Document 1 discloses a data collection system that collects road information from an in-vehicle device mounted on each vehicle.
- The in-vehicle device described in Patent Document 1 is connected to various sensors of the vehicle and detects an abnormality based on the signals input from those sensors.
- When an abnormality is detected, the in-vehicle device generates collection conditions, which include target vehicle conditions, recording triggers, and collection content.
- The in-vehicle device sends a data collection request containing the generated collection conditions to the data collection device.
- The data collection device collects data based on the received collection conditions.
- Specifically, the data collection device transmits a collection condition file containing the collection conditions generated by the in-vehicle device that sent the request to the in-vehicle device of each vehicle.
- The in-vehicle device of each vehicle then transmits data satisfying the collection conditions to the data collection device.
- Because the data collection device collects only data satisfying the collection conditions, data can be collected more efficiently than if all data were collected from the in-vehicle devices.
- However, the in-vehicle device transmits a data collection request only when triggered by a condition such as a sensor value exceeding a threshold. The data collection device therefore cannot collect data whenever the vehicle is in a situation corresponding to a specific scene.
- An object of the present disclosure is to provide an information collection system, a server, a vehicle, an information collection method, an information transmission method, and a computer-readable medium that enable a server to collect data when a vehicle is in a situation corresponding to a specific scene.
- As a first aspect, the present disclosure provides an information collection system.
- The information collection system has a server and a vehicle connected to the server via a network.
- The server has recognition model selection means for selecting a recognition model for identifying, based on sensor information, that the vehicle is in a situation corresponding to a specific scene; transmission means for transmitting the recognition model to the vehicle; and data collection means for collecting information transmitted from the vehicle.
- The vehicle has scene determination means for determining, based on the recognition model received from the server and sensor information, whether the vehicle is in a situation corresponding to the specific scene, and data transmission means for transmitting information to the server when it is determined that the vehicle is in such a situation.
- As a second aspect, the present disclosure provides a server.
- The server has recognition model selection means for selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; transmission means for transmitting the recognition model to the vehicle via a network; and data collection means for collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- As a third aspect, the present disclosure provides a vehicle.
- The vehicle has scene determination means for determining, based on sensor information and a recognition model received from a server via a network for identifying that the vehicle is in a situation corresponding to a specific scene, whether the vehicle is in such a situation, and data transmission means for transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
- As a fourth aspect, the present disclosure provides an information collection method.
- The information collection method includes selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; transmitting the recognition model to the vehicle via a network; and collecting information from the vehicle when it is determined, based on the recognition model and the sensor information, that the vehicle is in a situation corresponding to the specific scene.
- As a fifth aspect, the present disclosure provides an information transmission method.
- The information transmission method includes determining, based on sensor information and a recognition model received from a server via a network for identifying that the vehicle is in a situation corresponding to a specific scene, whether the vehicle is in such a situation, and transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
- As a sixth aspect, the present disclosure provides a computer-readable medium.
- The computer-readable medium stores a program that causes a computer to execute a process of selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene, transmitting the recognition model to the vehicle via a network, and collecting information from the vehicle.
- As a seventh aspect, the present disclosure provides a computer-readable medium.
- The computer-readable medium stores a program that causes a processor to execute a process of determining, based on sensor information and a recognition model received from a server via a network, whether the vehicle is in a situation corresponding to a specific scene, and transmitting information to the server via the network when it is determined that the vehicle is in such a situation.
- The information collection system, server, vehicle, information collection method, information transmission method, and computer-readable medium according to the present disclosure enable the server to collect data when the vehicle is in a situation corresponding to a specific scene.
- FIG. 1 is a block diagram schematically showing an information collection system according to the present disclosure.
- FIG. 2 is a block diagram showing a data collection system according to the first embodiment of the present disclosure.
- FIG. 3 is a block diagram showing a configuration example of the server.
- FIG. 4 is a block diagram showing a configuration example of the vehicle.
- FIG. 5 is a flowchart showing an operation procedure in the server.
- FIG. 6 is a flowchart showing an operation procedure in the vehicle.
- FIG. 7 is a block diagram showing a data collection system according to the second embodiment of the present disclosure.
- FIG. 1 schematically shows an information collection system according to the present disclosure.
- The information collection system 10 has a server 20 and a vehicle 30.
- The server 20 and the vehicle 30 are connected to each other via a network.
- The server 20 has recognition model selection means 21, transmission means 22, and data collection means 23.
- The vehicle 30 has scene determination means 31 and data transmission means 32.
- The recognition model selection means 21 of the server 20 selects a recognition model for identifying, based on sensor information, that the vehicle is in a situation corresponding to a specific scene.
- The transmission means 22 transmits the recognition model selected by the recognition model selection means 21 to the vehicle 30.
- The scene determination means 31 of the vehicle 30 determines whether the vehicle 30 is in a situation corresponding to the specific scene based on the recognition model received from the server 20 and sensor information. When it is determined that the vehicle 30 is in such a situation, the data transmission means 32 transmits information to the server 20. The data collection means 23 of the server 20 collects the information transmitted from the vehicle 30.
- In this way, the server 20 transmits the recognition model to the vehicle 30.
- The vehicle 30 determines, based on the received recognition model and sensor information, whether it is in a situation corresponding to the specific scene.
- When it is, the vehicle 30 transmits information to the server 20.
- Through the recognition model it transmits to the vehicle 30, the server 20 can specify the scene in which the vehicle 30 transmits information. The server 20 can therefore collect data when the vehicle 30 is in a situation corresponding to a specific scene.
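The exchange between the server 20 and the vehicle 30 in FIG. 1 can be sketched as follows. This is a hedged illustration only: the disclosure does not specify any implementation, so the recognition model is abstracted as a callable, and all class and variable names (`Server`, `Vehicle`, `push_model`, the hard-braking stand-in model) are assumptions for illustration.

```python
# Illustrative sketch of FIG. 1: server selects and pushes a recognition
# model; the vehicle applies it to sensor information and reports back.
class Server:
    def __init__(self, model):
        self.model = model          # result of recognition model selection means 21
        self.collected = []         # data collection means 23

    def push_model(self, vehicle):  # transmission means 22
        vehicle.model = self.model

    def receive(self, info):
        self.collected.append(info)

class Vehicle:
    def __init__(self):
        self.model = None

    def on_sensor_update(self, sensor_info, server):
        # Scene determination means 31: apply the received recognition model.
        if self.model is not None and self.model(sensor_info):
            server.receive(sensor_info)   # data transmission means 32

# Example with a trivial stand-in model that flags hard braking:
server = Server(lambda s: s["brake"] > 0.8)
vehicle = Vehicle()
server.push_model(vehicle)
vehicle.on_sensor_update({"brake": 0.9}, server)  # matches the scene: sent
vehicle.on_sensor_update({"brake": 0.1}, server)  # no match: nothing sent
```

Note that only the sensor updates matching the pushed model reach the server, which is the filtering effect the disclosure attributes to transmitting the recognition model.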
- FIG. 2 shows a data collection system according to the first embodiment of the present disclosure.
- The data collection system 100 includes a server 110 and one or more vehicles 200.
- The server 110 is connected to the vehicles 200 via a network 150.
- The network 150 includes, for example, a wireless communication network using a communication standard such as LTE (Long Term Evolution).
- The network 150 may also include a wireless communication network such as WiFi® or a 5th-generation mobile communication system.
- The data collection system 100 corresponds to the information collection system 10 shown in FIG. 1.
- The server 110 corresponds to the server 20 shown in FIG. 1.
- The vehicle 200 corresponds to the vehicle 30 shown in FIG. 1.
- FIG. 3 shows a configuration example of the server 110.
- The server 110 includes a recognition model selection unit 111, a parameter determination unit 112, a transmission unit 113, a data collection unit 114, and an analyzer 115.
- The server 110 is located, for example, on a connected-service infrastructure.
- The recognition model selection unit 111 selects a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene.
- The recognition model includes, for example, at least one of a recognition model for discriminating a road rage scene, a recognition model for discriminating a scene of ignoring a traffic signal, and a recognition model for discriminating a dozing scene.
- The recognition model is configured, for example, as an artificial intelligence (AI) model.
- The recognition model includes, for example, a convolutional neural network (CNN).
- The recognition model selection unit 111 selects the recognition model to be transmitted to a vehicle 200 based, for example, on the position information of the vehicle 200.
- The recognition model selection unit 111 selects the recognition model using, for example, regional characteristic information in which geographical positions are associated with specific scenes.
- The regional characteristic information stores information indicating the specific scene to be recognized, for example, for each section of a mesh divided at predetermined distances.
- The recognition model selection unit 111 acquires, from the regional characteristic information, the information indicating the specific scene associated with the position of the vehicle 200.
- The recognition model selection unit 111 then selects the recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle 200.
- The recognition model selection unit 111 may select the recognition model according to the type of road on which the vehicle 200 is traveling.
- Road types include, for example, motorways such as expressways, arterial roads, urban roads, and suburban roads.
- The recognition model selection unit 111 may select a rear-end collision scene determination model, for example, when the vehicle 200 is traveling on a road that continues without intersections, such as an expressway.
- The recognition model selection unit 111 may also select the recognition model according to the route (route name) on which the vehicle is traveling, such as national highway No. 1 or prefectural highway No. 55.
- The recognition model selection unit 111 may select the recognition model according to a combination of the area in which the vehicle 200 is traveling, such as a prefecture, and the road type or route name.
- The recognition model selection unit 111 may also select a recognition model specified by an operator as the recognition model to be transmitted to the vehicle 200.
- The recognition model selection unit 111 may select a recognition model for each vehicle. For example, the recognition model selection unit 111 can select different recognition models for different vehicles 200. The recognition model selection unit 111 may also select a plurality of recognition models for a single vehicle 200. For example, it may select both a recognition model for discriminating a road rage scene and a recognition model for discriminating a dozing scene as the recognition models to be transmitted to one vehicle 200. The recognition model selection unit 111 corresponds to the recognition model selection means 21 shown in FIG. 1.
- The parameter determination unit (parameter determination means) 112 determines parameters specifying the information to be acquired from the vehicle 200, based on the recognition model selected by the recognition model selection unit 111.
- The parameter determination unit 112 holds, for example, a table in which each recognition model, or each situation identified using it, is associated with the types of information to be acquired.
- The parameter determination unit 112 refers to this table and determines the parameters according to the selected recognition model.
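The table lookup can be sketched as follows. The actual associations between recognition models and data types are not given in the disclosure; the table contents and names below are assumptions for illustration.

```python
# Hypothetical parameter-determination table: recognition model -> data types
# to be described in the parameter sheet sent to the vehicle.
MODEL_PARAMETERS = {
    "model_road_rage_v1": ["front_camera", "rear_camera", "speed", "position"],
    "model_rear_end_v2": ["front_camera", "speed", "acceleration", "brake"],
    "model_dozing_v1": ["cabin_camera", "steering_angle", "speed"],
}

def build_parameter_sheet(selected_models):
    """Determine the union of data types required by the selected models,
    preserving first-seen order and avoiding duplicates."""
    params = []
    for model in selected_models:
        for p in MODEL_PARAMETERS.get(model, []):
            if p not in params:
                params.append(p)
    return params
```

When several models are pushed to one vehicle, a single merged parameter sheet keeps each data type listed once even if multiple models need it.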
- The transmission unit 113 transmits the recognition model selected by the recognition model selection unit 111 to the vehicle 200. The transmission unit 113 also transmits to the vehicle 200 a parameter sheet describing the parameters (information types) determined by the parameter determination unit 112. The transmission unit 113 deploys the recognition model and the parameter sheet to the vehicle 200 using, for example, OTA (Over The Air) technology. The transmission unit 113 corresponds to the transmission means 22 shown in FIG. 1.
- The data collection unit 114 collects the information transmitted from each vehicle 200.
- The data collection unit 114 corresponds to the data collection means 23 shown in FIG. 1.
- The analyzer 115 performs analysis using the information collected by the data collection unit 114.
- The analyzer 115 performs the analysis, for example, using the information collected from the vehicles 200 for each situation identified using a recognition model.
- The analyzer 115 does not necessarily have to be configured inside the server 110, and may be configured as a device physically separate from the server 110.
- FIG. 4 shows a configuration example of the vehicle 200.
- The vehicle 200 includes a peripheral monitoring sensor 201, a vehicle sensor 202, a vehicle control ECU (Electronic Control Unit) 203, a scene recognition unit 204, and a communication device 205.
- These components are configured to communicate with each other via an in-vehicle LAN (Local Area Network), a CAN (Controller Area Network), or the like.
- The peripheral monitoring sensor 201 is a sensor that monitors the surroundings of the vehicle 200.
- The peripheral monitoring sensor 201 includes, for example, a camera, a radar, a LiDAR (Light Detection and Ranging) sensor, and the like.
- The peripheral monitoring sensor 201 may include, for example, a plurality of cameras that capture the front, rear, right side, and left side of the vehicle.
- The peripheral monitoring sensor 201 may also include a camera that captures the interior of the vehicle 200.
- The vehicle sensor 202 is a sensor for detecting various states of the vehicle 200.
- The vehicle sensor 202 includes, for example, a vehicle speed sensor that detects the vehicle speed, a steering sensor that detects the steering angle, an accelerator opening sensor that detects the opening degree of the accelerator pedal, and a brake pedal force sensor that detects the amount of depression of the brake pedal.
- The vehicle control ECU 203 is an electronic control device that controls the running of the vehicle 200.
- The electronic control device has a processor, a memory, an I/O (Input/Output) interface, and a bus connecting them.
- Based on the sensor information output by the vehicle sensor 202, the vehicle control ECU 203 performs various controls such as control of the fuel injection amount, control of the engine ignition timing, and control of the power steering assist amount.
- The communication device 205 is configured as a device that performs wireless communication between the vehicle 200 and the network 150 (see FIG. 2).
- The communication device 205 includes a wireless communication antenna, a transmitter, and a receiver.
- The scene recognition unit 204 is a functional unit that transmits the information specified in the parameter sheet to the server 110 when the vehicle 200 is in a situation corresponding to a specific scene identified using the recognition model.
- The scene recognition unit 204 receives the recognition model and the parameter sheet from the server 110 through the communication device 205, and transmits the information (data) specified in the parameter sheet to the server 110 through the communication device 205.
- The scene recognition unit 204 has a recognition model storage unit 241, a scene determination unit 242, a data transmission unit 243, and a recognition model update unit 244.
- The recognition model storage unit 241 stores one or more recognition models received from the server 110.
- The scene determination unit 242 acquires a recognition model from the recognition model storage unit 241 and acquires sensor information from the peripheral monitoring sensor 201 and the vehicle sensor 202. The scene determination unit 242 determines, based on the recognition model and the sensor information, whether the vehicle 200 is in a situation corresponding to the specific scene.
- The scene determination unit 242 acquires as sensor information, for example, information obtained using at least one of the camera included in the peripheral monitoring sensor 201 and the speed sensor and acceleration sensor included in the vehicle sensor 202.
- The scene determination unit 242 inputs the sensor information to, for example, the CNN constituting the recognition model.
- The recognition model outputs a determination result indicating whether the vehicle is in a situation corresponding to the specific scene.
- When it is determined that the vehicle is in such a situation, the scene determination unit 242 notifies the data transmission unit 243 to that effect.
- The scene determination unit 242 corresponds to the scene determination means 31 shown in FIG. 1.
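The determination step can be sketched as below. The CNN itself is out of scope here, so a stand-in callable plays its role: it takes sensor information and returns whether the vehicle is in the specific scene. The stand-in model and its thresholds are assumptions for illustration, not from the disclosure.

```python
# Minimal sketch of the scene-determination step (unit 242).
def determine_scene(recognition_model, sensor_info: dict) -> bool:
    """Apply the recognition model to the current sensor information and
    return True when the vehicle is in the specific scene."""
    return bool(recognition_model(sensor_info))

# Hypothetical stand-in for a rear-end-collision-risk recognition model:
# flags hard braking at speed (illustrative thresholds only).
def rear_end_risk_model(sensor_info: dict) -> bool:
    return sensor_info["speed_kmh"] > 60 and sensor_info["brake_depression"] > 0.8
```

A real deployment would replace `rear_end_risk_model` with inference over the CNN received from the server; the surrounding control flow stays the same.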
- Upon this notification, the data transmission unit 243 transmits the data specified in the parameter sheet to the server 110.
- The data transmission unit 243 transmits to the server 110, for example, the data specified in the parameter sheet among the data that can be acquired from the peripheral monitoring sensor 201, the vehicle sensor 202, and the vehicle control ECU 203.
- The data transmission unit 243 transmits, for example, the images of the camera included in the peripheral monitoring sensor 201 to the server 110.
- When it is not determined that the vehicle is in a situation corresponding to the specific scene, the data transmission unit 243 does not transmit the data specified in the parameter sheet.
- However, the data transmission unit 243 may always transmit certain information, such as vehicle position information, to the server 110.
- The data transmission unit 243 acquires the data specified in the parameter sheet from the peripheral monitoring sensor 201, the vehicle sensor 202, and the vehicle control ECU 203 regardless of the determination result of the scene determination unit 242, for example.
- When it is determined that the vehicle is in a situation corresponding to the specific scene, the data transmission unit 243 transmits the acquired data to the server 110. Otherwise, the data transmission unit 243 discards the acquired data.
- The data transmission unit 243 corresponds to the data transmission means 32 shown in FIG. 1.
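The acquire-then-decide behaviour described above can be sketched as follows: data named in the parameter sheet is captured regardless of the determination result, then either sent to the server or discarded. Function and parameter names are illustrative assumptions.

```python
# Sketch of one cycle of the data transmission unit (unit 243).
def handle_cycle(sensors: dict, parameter_sheet: list, in_scene: bool, send):
    """Capture the data specified in the parameter sheet, then transmit it
    only when the vehicle is determined to be in the specific scene."""
    captured = {k: sensors[k] for k in parameter_sheet if k in sensors}
    if in_scene:
        send(captured)   # transmit to the server (corresponds to step B4)
        return captured
    return None          # otherwise the captured data is discarded
```

Capturing first and deciding afterwards matches the description that acquisition happens regardless of the determination result, with discarded data never leaving the vehicle.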
- The transmission of the parameter sheet from the server 110 to the vehicle 200 may be omitted.
- In that case, the data transmission unit 243 may transmit, for example, predetermined information to the server 110.
- Alternatively, each recognition model may be associated in advance with the information to be transmitted when that recognition model is used.
- The data transmission unit 243 may then transmit the information corresponding to the recognition model used by the scene determination unit 242 to the server 110.
- The recognition model update unit (recognition model update means) 244 receives recognition models from the server 110.
- The recognition model update unit 244 stores a received recognition model in the recognition model storage unit 241.
- The recognition model update unit 244 may update a recognition model stored in the recognition model storage unit 241 with a recognition model received from the server 110.
- In that case, the scene determination unit 242 determines whether the vehicle is in a situation corresponding to the specific scene using the updated recognition model.
- FIG. 5 shows an operation procedure (information collection method) in the server 110.
- The recognition model selection unit 111 (see FIG. 3) of the server 110 selects the recognition model to be transmitted to the vehicle 200 (step A1).
- The recognition model selection unit 111 selects, for example, the scene to be recognized in the vehicle 200 based on the position information of the vehicle 200, and selects the recognition model corresponding to the selected scene.
- The parameter determination unit 112 determines the parameters corresponding to the recognition model selected in step A1 as the parameters to be transmitted to the vehicle 200 (step A2).
- The transmission unit 113 transmits the recognition model selected in step A1 and a parameter sheet describing the parameters determined in step A2 to the vehicle 200 via the network 150 (see FIG. 2) (step A3).
- FIG. 6 shows an operation procedure (information transmission method) in the vehicle 200.
- The scene recognition unit 204 (see FIG. 4) of the vehicle 200 receives the recognition model and the parameter sheet from the server 110 via the network 150 (step B1).
- The scene recognition unit 204 may receive the recognition model and the parameters while the vehicle is running.
- The recognition model update unit 244 stores the recognition model received in step B1 in the recognition model storage unit 241 (step B2). If a recognition model is already stored in the recognition model storage unit 241, the recognition model update unit 244 updates it with the recognition model received in step B1.
- The scene determination unit 242 acquires sensor information from the peripheral monitoring sensor 201 and the vehicle sensor 202 of the vehicle 200.
- The scene determination unit 242 applies the acquired sensor information to the recognition model and determines whether the vehicle is in a situation corresponding to the specific scene (step B3).
- When it is determined in step B3 that the vehicle is in a situation corresponding to the specific scene, the data transmission unit 243 transmits the data specified in the parameter sheet to the server 110 via the network 150 (step B4).
- The data collection unit 114 of the server 110 receives the data transmitted from the vehicle 200 (step A4).
- The data collection unit 114 thus collects, from a plurality of vehicles 200, data transmitted when each vehicle was determined to be in a situation corresponding to the specific scene. The data collection unit 114 outputs the collected data to the analyzer 115, for example, in association with the recognition model transmitted to each vehicle 200.
- The analyzer 115 accumulates the data received from the vehicles 200, for example, for each scene recognized using a recognition model, and analyzes the accumulated data.
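The server-side accumulation (step A4 onward) can be sketched as below: data arriving from vehicles is keyed by the recognition model that triggered it, so the analyzer can process each scene separately. Class and method names are illustrative assumptions.

```python
# Sketch of the data collection unit (114): per-scene accumulation of data
# received from vehicles, keyed by the recognition model that triggered it.
from collections import defaultdict

class DataCollector:
    def __init__(self):
        self.by_model = defaultdict(list)

    def receive(self, model_id: str, payload: dict):
        """Accumulate data from a vehicle under the model that produced it."""
        self.by_model[model_id].append(payload)

    def data_for(self, model_id: str):
        """Return everything collected for one scene, for the analyzer."""
        return self.by_model[model_id]
```

Keying by model identifier is what lets the analyzer work "for each situation identified using a recognition model," as the description states.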
- As described above, the server 110 transmits the recognition model and the parameters to the vehicle 200.
- The vehicle 200 determines, based on the received recognition model and sensor information, whether it is in a situation corresponding to the specific scene.
- When it is, the vehicle 200 transmits the information specified by the parameters to the server 110.
- Through the recognition model and parameters it transmits to the vehicle 200, the server 110 can specify both the scene in which the vehicle 200 transmits information and the information to be transmitted. The server 110 can therefore collect the desired data when the vehicle 200 is in a situation corresponding to the specific scene.
- The server 110 can select the recognition model and parameters according to the data to be acquired. If the server 110 were to acquire all data from the vehicles 200, it would need to receive and process a large amount of data from a large number of vehicles 200. In the present embodiment, the server 110 can use the recognition model and parameters to specify both the situation in which data is transmitted and the data to be acquired. The data collected from the vehicles 200 can therefore be narrowed down, which reduces storage costs in the server 110. Furthermore, the amount of data transferred between the server 110 and the vehicles 200 is reduced, which reduces communication costs.
- FIG. 7 shows a data collection system according to the second embodiment of the present disclosure.
- the data collection system 100a according to the present embodiment is different from the data collection system 100 according to the first embodiment shown in FIG. 2 in that it further includes a traffic information system 300.
- the configuration of the server 110 may be the same as the configuration of the server 110 in the first embodiment shown in FIG.
- the configuration of the vehicle 200 may be the same as the configuration of the vehicle 200 in the first embodiment shown in FIG.
- the traffic information system 300 is a system that provides regional characteristic information regarding traffic.
- the traffic information system 300 holds information in which a specific event related to traffic and a point where the event occurs frequently are associated with each other.
- the traffic information system 300 holds, for example, rear-end collision frequent occurrence point information 310 including information indicating a rear-end collision frequent occurrence point.
- the traffic information system 300 may further hold information indicating a point where road rage frequently occurs.
- the traffic information system 300 provides the server 110 with the rear-end collision frequent occurrence point information 310 that it holds.
- the recognition model selection unit 111 of the server 110 refers to the information held by the traffic information system 300 and selects the recognition model.
- the recognition model selection unit 111 refers to, for example, the rear-end collision frequent occurrence point information 310, and determines whether or not the vehicle 200 is located at a rear-end collision frequent occurrence point.
- when the server 110 determines that rear-end collisions occur frequently near the current location of the vehicle 200, the server 110 selects a recognition model for identifying a rear-end collision scene as the recognition model to be transmitted to the vehicle 200.
- the server 110 cooperates with the traffic information system 300 and selects a recognition model using the information held by the traffic information system 300.
- by selecting a recognition model using the rear-end collision frequent occurrence point information 310, the server 110 can transmit, for example, a recognition model for identifying a rear-end collision scene to the vehicle 200 when rear-end collisions occur frequently near the current location of the vehicle 200.
- in this way, a recognition model for identifying an event, such as an accident, that frequently occurs in the vicinity of the current location of the vehicle 200 can be deployed in the vehicle 200. Other effects are the same as those described in the first embodiment.
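As a rough sketch of this position-based selection, the following assumes a small list of hotspot coordinates; the coordinates, the search radius, and the model names are invented for illustration and are not taken from the disclosure.

```python
import math

# Hypothetical rear-end collision frequent occurrence points as (lat, lon).
REAR_END_HOTSPOTS = [(35.681, 139.767), (34.702, 135.495)]

def near_hotspot(lat: float, lon: float, radius_km: float = 1.0) -> bool:
    """Coarse flat-earth distance check, adequate for a ~1 km radius."""
    for hlat, hlon in REAR_END_HOTSPOTS:
        dx_km = (lon - hlon) * 111.32 * math.cos(math.radians(hlat))
        dy_km = (lat - hlat) * 110.57
        if math.hypot(dx_km, dy_km) <= radius_km:
            return True
    return False

def select_model(lat: float, lon: float) -> str:
    """Pick the recognition model to transmit based on the vehicle position."""
    if near_hotspot(lat, lon):
        return "rear_end_collision_scene_model"
    return "default_scene_model"
```

A production system would query the traffic information system 300 rather than a hard-coded list, but the decision itself reduces to this lookup.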
- the parameter sheet may include information indicating the priority of the data.
- the priority levels include, for example, "high", "medium", and "low".
- the data transmission unit 243 may monitor the communication band in the network 150 (see FIG. 2) and preferentially transmit data having a higher priority according to the communication band to the server 110.
- the data transmission unit 243 transmits data of all priorities to the server 110, for example, when the communication band is larger than a first threshold value.
- when the communication band is narrower, the data transmission unit 243 transmits the data whose priority is set to "high" or "medium" to the server 110, and may discard the data whose priority is set to "low".
- when the communication band is narrower still, the data transmission unit 243 transmits only the data whose priority is set to "high" to the server 110, and may discard the data whose priority is set to "medium" or "low".
- the parameter sheet may include information that specifies data that is always transmitted to the server 110, regardless of the determination result of the scene determination unit 242.
- the data transmission unit 243 may determine which data is important for the analysis, based on the purpose of the analysis performed in the analyzer 115 (see FIG. 3) and on the scene, and may discard data that is not important. For example, when the scene determination unit 242 determines a following scene, little information is obtained from the moving image data. If the sensor data is sufficient for the analysis, the data transmission unit 243 may discard the moving image data on the vehicle side without transmitting it to the server 110.
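A minimal sketch of the bandwidth-dependent priority filtering described above follows; the concrete threshold values and the `always_send` marker (standing in for the parameter-sheet entries that are transmitted regardless of the scene determination) are assumptions for the example.

```python
from typing import List, Tuple

def select_for_transmission(items: List[Tuple[str, str]],
                            bandwidth_mbps: float,
                            first_threshold: float = 10.0,
                            second_threshold: float = 2.0,
                            always_send: frozenset = frozenset()) -> List[Tuple[str, str]]:
    """Return the (priority, name) items to transmit for the current band.

    Above the first threshold all priorities are sent; in between, "low"
    is discarded; below the second threshold only "high" survives. Items
    named in always_send are transmitted regardless of priority.
    """
    if bandwidth_mbps > first_threshold:
        allowed = {"high", "medium", "low"}
    elif bandwidth_mbps > second_threshold:
        allowed = {"high", "medium"}
    else:
        allowed = {"high"}
    return [(p, name) for p, name in items
            if p in allowed or name in always_send]

items = [("high", "accel_trace"), ("medium", "speed_log"), ("low", "video")]
# On a narrow link only "high" data survives, unless pinned by always_send.
sent = select_for_transmission(items, bandwidth_mbps=1.0,
                               always_send=frozenset({"speed_log"}))
# sent == [("high", "accel_trace"), ("medium", "speed_log")]
```

The same filter can also express the scene-dependent discarding of moving image data: marking "video" as "low" priority for a following scene causes it to be dropped on the vehicle side whenever the band is constrained.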
- the server 110 may be configured as a computer device.
- FIG. 8 shows a configuration example of a computer device that can be used as the server 110.
- the computer device 500 includes a control unit (CPU: Central Processing Unit) 510, a storage unit 520, a ROM (Read Only Memory) 530, a RAM (Random Access Memory) 540, a communication interface (IF: Interface) 550, and a user interface 560.
- the communication interface 550 is an interface for connecting the computer device 500 to a communication network via wired communication means, wireless communication means, or the like.
- the user interface 560 includes a display unit such as a display. Further, the user interface 560 includes an input unit such as a keyboard, a mouse, and a touch panel.
- the storage unit 520 is an auxiliary storage device that can hold various types of data.
- the storage unit 520 does not necessarily have to be a part of the computer device 500, and may be an external storage device or a cloud storage connected to the computer device 500 via a network.
- ROM 530 is a non-volatile storage device.
- for the ROM 530, a semiconductor storage device having a relatively small capacity, such as a flash memory, is used.
- the program executed by the CPU 510 may be stored in the storage unit 520 or the ROM 530.
- the storage unit 520 or ROM 530 stores, for example, various programs for realizing the functions of each unit in the server 110.
- RAM 540 is a volatile storage device.
- for the RAM 540, various semiconductor memory devices such as a DRAM (Dynamic Random Access Memory) or an SRAM (Static Random Access Memory) are used.
- the RAM 540 can be used as an internal buffer for temporarily storing data and the like.
- the CPU 510 loads a program stored in the storage unit 520 or the ROM 530 into the RAM 540 and executes it. The functions of each unit in the server 110 are realized by the CPU 510 executing the program.
- the CPU 510 may have an internal buffer that can temporarily store data and the like.
- the scene recognition unit 204 can be configured as an electronic control unit (electronic control device).
- FIG. 9 shows a hardware configuration example of an electronic control device that can be used for the scene recognition unit 204.
- the electronic control device 600 includes a processor 601, a ROM 602, and a RAM 603.
- the processor 601, the ROM 602, and the RAM 603 are connected to each other via the bus 604.
- the electronic control device 600 may include other circuits such as peripheral circuits, communication circuits, and interface circuits.
- ROM 602 is a non-volatile storage device.
- for the ROM 602, a semiconductor storage device having a relatively small capacity, such as a flash memory, is used.
- the ROM 602 stores a program executed by the processor 601.
- the RAM 603 is a volatile storage device.
- Various semiconductor memory devices such as DRAM or SRAM are used for the RAM 603.
- the RAM 603 can be used as an internal buffer for temporarily storing data and the like.
- the processor 601 loads the program stored in the ROM 602 into the RAM 603 and executes it. The functions of each part of the scene recognition unit 204 are realized by the processor 601 executing the program.
- non-transitory computer-readable media include various types of tangible storage media.
- examples of non-transitory computer-readable media include magnetic recording media such as flexible disks, magnetic tapes, and hard disks; magneto-optical recording media such as magneto-optical disks; optical disc media such as CDs (compact discs) and DVDs (digital versatile discs); and semiconductor memories such as mask ROM, PROM (programmable ROM), EPROM (erasable PROM), flash ROM, and RAM.
- the program may also be supplied to the computer by various types of transitory computer-readable media. Examples of transitory computer-readable media include electric signals, optical signals, and electromagnetic waves.
- a transitory computer-readable medium can supply the program to the computer via a wired communication path such as an electric wire or an optical fiber, or via a wireless communication path.
- [Appendix 1] An information collection system having: a server; and a vehicle connected to the server via a network, wherein the server has: a recognition model selection means for selecting a recognition model for identifying, based on sensor information, that the vehicle is in a situation corresponding to a specific scene; a transmission means for transmitting the recognition model to the vehicle; and a data collection means for collecting information transmitted from the vehicle, and the vehicle has: a scene determination means for determining whether or not the vehicle is in a situation corresponding to the specific scene, based on the recognition model received from the server and sensor information; and a data transmission means for transmitting information to the server when it is determined that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 2] The information collection system according to Appendix 1, wherein the server further has a parameter determination means for determining, based on the selected recognition model, a parameter that specifies information to be acquired from the vehicle, the transmission means further transmits the determined parameter to the vehicle, and the data transmission means transmits the information specified in the parameter received from the server to the server when it is determined that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 3] The information collection system according to Appendix 1 or 2, wherein the recognition model selection means selects the recognition model to be transmitted to the vehicle based on position information of the vehicle.
- [Appendix 4] The information collection system according to Appendix 3, wherein the recognition model selection means acquires, from regional characteristic information in which a geographical position is associated with the specific scene, information indicating the specific scene associated with the position of the vehicle, and selects a recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle.
- [Appendix 5] The information collection system according to any one of Appendices 1 to 4, wherein the recognition model selection means selects the recognition model according to the type of road on which the vehicle is traveling.
- [Appendix 6] The information collection system according to any one of Appendices 1 to 5, wherein the sensor information includes at least one of information acquired using a camera mounted on the vehicle, information acquired using a speed sensor, and information acquired using an acceleration sensor.
- [Appendix 7] The information collection system according to any one of Appendices 1 to 6, wherein the recognition model includes a CNN (Convolutional Neural Network).
- [Appendix 8] The information collection system according to any one of Appendices 1 to 7, wherein the vehicle further has a recognition model update means for receiving the recognition model from the server and updating the recognition model used by the scene determination means with the received recognition model.
- [Appendix 9] The information collection system according to any one of Appendices 1 to 8, wherein the server further has an analyzer that performs analysis using the information collected by the data collection means.
- [Appendix 10] A server having: a recognition model selection means for selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; a transmission means for transmitting the recognition model to the vehicle via a network; and a data collection means for collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 11] The server according to Appendix 10, further having a parameter determination means for determining, based on the selected recognition model, a parameter that specifies information to be transmitted by the vehicle when it is determined in the vehicle that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 12] The server according to Appendix 10 or 11, wherein the recognition model selection means selects the recognition model to be transmitted to the vehicle based on position information of the vehicle.
- [Appendix 13] The server according to Appendix 12, wherein the recognition model selection means acquires, from regional characteristic information in which a geographical position is associated with the specific scene, information indicating the specific scene associated with the position of the vehicle, and selects a recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle.
- [Appendix 14] The server according to any one of Appendices 10 to 13, wherein the recognition model selection means selects the recognition model according to the type of road on which the vehicle is traveling.
- [Appendix 15] The server according to any one of Appendices 10 to 14, further having an analyzer that performs analysis using the information collected by the data collection means.
- [Appendix 16] A vehicle having: a scene determination means for determining whether or not the vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and a data transmission means for transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 17] The vehicle according to Appendix 16, wherein, when it is determined that the vehicle is in a situation corresponding to the specific scene, the data transmission means transmits to the server the information specified in a parameter received from the server that specifies the information to be transmitted to the server.
- [Appendix 18] The vehicle according to Appendix 16 or 17, wherein the sensor information includes at least one of information acquired using a camera mounted on the vehicle, information acquired using a speed sensor, and information acquired using an acceleration sensor.
- [Appendix 19] The vehicle according to any one of Appendices 16 to 18, further having a recognition model update means for receiving the recognition model from the server and updating the recognition model used by the scene determination means with the received recognition model.
- [Appendix 20] An information collection method comprising: selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; transmitting the recognition model to the vehicle via a network; and collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 21] An information transmission method comprising: determining whether or not a vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 22] A non-transitory computer-readable medium storing a program for causing a computer to execute processing of: selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; transmitting the recognition model to the vehicle via a network; and collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- [Appendix 23] A non-transitory computer-readable medium storing a program for causing a processor to execute processing of: determining whether or not a vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
Description
[Appendix 1]
An information collection system having:
a server; and
a vehicle connected to the server via a network, wherein
the server has:
a recognition model selection means for selecting a recognition model for identifying, based on sensor information, that the vehicle is in a situation corresponding to a specific scene;
a transmission means for transmitting the recognition model to the vehicle; and
a data collection means for collecting information transmitted from the vehicle, and
the vehicle has:
a scene determination means for determining whether or not the vehicle is in a situation corresponding to the specific scene, based on the recognition model received from the server and sensor information; and
a data transmission means for transmitting information to the server when it is determined that the vehicle is in a situation corresponding to the specific scene.
[Appendix 2]
The information collection system according to Appendix 1, wherein the server further has a parameter determination means for determining, based on the selected recognition model, a parameter that specifies information to be acquired from the vehicle,
the transmission means further transmits the determined parameter to the vehicle, and
the data transmission means transmits the information specified in the parameter received from the server to the server when it is determined that the vehicle is in a situation corresponding to the specific scene.
[Appendix 3]
The information collection system according to Appendix 1 or 2, wherein the recognition model selection means selects the recognition model to be transmitted to the vehicle based on position information of the vehicle.
[Appendix 4]
The information collection system according to Appendix 3, wherein the recognition model selection means acquires, from regional characteristic information in which a geographical position is associated with the specific scene, information indicating the specific scene associated with the position of the vehicle, and selects a recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle.
[Appendix 5]
The information collection system according to any one of Appendices 1 to 4, wherein the recognition model selection means selects the recognition model according to the type of road on which the vehicle is traveling.
[Appendix 6]
The information collection system according to any one of Appendices 1 to 5, wherein the sensor information includes at least one of information acquired using a camera mounted on the vehicle, information acquired using a speed sensor, and information acquired using an acceleration sensor.
[Appendix 7]
The information collection system according to any one of Appendices 1 to 6, wherein the recognition model includes a CNN (Convolutional Neural Network).
[Appendix 8]
The information collection system according to any one of Appendices 1 to 7, wherein the vehicle further has a recognition model update means for receiving the recognition model from the server and updating the recognition model used by the scene determination means with the received recognition model.
[Appendix 9]
The information collection system according to any one of Appendices 1 to 8, wherein the server further has an analyzer that performs analysis using the information collected by the data collection means.
[Appendix 10]
A server having:
a recognition model selection means for selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene;
a transmission means for transmitting the recognition model to the vehicle via a network; and
a data collection means for collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
[Appendix 11]
The server according to Appendix 10, further having a parameter determination means for determining, based on the selected recognition model, a parameter that specifies information to be transmitted by the vehicle when it is determined in the vehicle that the vehicle is in a situation corresponding to the specific scene.
[Appendix 12]
The server according to Appendix 10 or 11, wherein the recognition model selection means selects the recognition model to be transmitted to the vehicle based on position information of the vehicle.
[Appendix 13]
The server according to Appendix 12, wherein the recognition model selection means acquires, from regional characteristic information in which a geographical position is associated with the specific scene, information indicating the specific scene associated with the position of the vehicle, and selects a recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle.
[Appendix 14]
The server according to any one of Appendices 10 to 13, wherein the recognition model selection means selects the recognition model according to the type of road on which the vehicle is traveling.
[Appendix 15]
The server according to any one of Appendices 10 to 14, further having an analyzer that performs analysis using the information collected by the data collection means.
[Appendix 16]
A vehicle having:
a scene determination means for determining whether or not the vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and
a data transmission means for transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
[Appendix 17]
The vehicle according to Appendix 16, wherein, when it is determined that the vehicle is in a situation corresponding to the specific scene, the data transmission means transmits to the server the information specified in a parameter received from the server that specifies the information to be transmitted to the server.
[Appendix 18]
The vehicle according to Appendix 16 or 17, wherein the sensor information includes at least one of information acquired using a camera mounted on the vehicle, information acquired using a speed sensor, and information acquired using an acceleration sensor.
[Appendix 19]
The vehicle according to any one of Appendices 16 to 18, further having a recognition model update means for receiving the recognition model from the server and updating the recognition model used by the scene determination means with the received recognition model.
[Appendix 20]
An information collection method comprising:
selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene;
transmitting the recognition model to the vehicle via a network; and
collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
[Appendix 21]
An information transmission method comprising:
determining whether or not a vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and
transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
[Appendix 22]
A non-transitory computer-readable medium storing a program for causing a computer to execute processing of:
selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene;
transmitting the recognition model to the vehicle via a network; and
collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
[Appendix 23]
A non-transitory computer-readable medium storing a program for causing a processor to execute processing of:
determining whether or not a vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and
transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
20: server
21: recognition model selection means
22: transmission means
23: data collection means
30: vehicle
31: scene determination means
32: data transmission means
100: data collection system
110: server
111: recognition model selection unit
112: parameter determination unit
113: transmission unit
114: data collection unit
115: analyzer
150: network
200: vehicle
201: peripheral monitoring sensor
202: vehicle sensor
203: vehicle control ECU
204: scene recognition unit
205: communication device
241: recognition model storage unit
242: scene determination unit
243: data transmission unit
244: recognition model update unit
300: traffic information system
310: rear-end collision frequent occurrence point information
Claims (23)
- An information collection system having: a server; and a vehicle connected to the server via a network, wherein the server has: a recognition model selection means for selecting a recognition model for identifying, based on sensor information, that the vehicle is in a situation corresponding to a specific scene; a transmission means for transmitting the recognition model to the vehicle; and a data collection means for collecting information transmitted from the vehicle, and the vehicle has: a scene determination means for determining whether or not the vehicle is in a situation corresponding to the specific scene, based on the recognition model received from the server and sensor information; and a data transmission means for transmitting information to the server when it is determined that the vehicle is in a situation corresponding to the specific scene.
- The information collection system according to Claim 1, wherein the server further has a parameter determination means for determining, based on the selected recognition model, a parameter that specifies information to be acquired from the vehicle, the transmission means further transmits the determined parameter to the vehicle, and the data transmission means transmits the information specified in the parameter received from the server to the server when it is determined that the vehicle is in a situation corresponding to the specific scene.
- The information collection system according to Claim 1 or 2, wherein the recognition model selection means selects the recognition model to be transmitted to the vehicle based on position information of the vehicle.
- The information collection system according to Claim 3, wherein the recognition model selection means acquires, from regional characteristic information in which a geographical position is associated with the specific scene, information indicating the specific scene associated with the position of the vehicle, and selects a recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle.
- The information collection system according to any one of Claims 1 to 4, wherein the recognition model selection means selects the recognition model according to the type of road on which the vehicle is traveling.
- The information collection system according to any one of Claims 1 to 5, wherein the sensor information includes at least one of information acquired using a camera mounted on the vehicle, information acquired using a speed sensor, and information acquired using an acceleration sensor.
- The information collection system according to any one of Claims 1 to 6, wherein the recognition model includes a CNN (Convolutional Neural Network).
- The information collection system according to any one of Claims 1 to 7, wherein the vehicle further has a recognition model update means for receiving the recognition model from the server and updating the recognition model used by the scene determination means with the received recognition model.
- The information collection system according to any one of Claims 1 to 8, wherein the server further has an analyzer that performs analysis using the information collected by the data collection means.
- A server having: a recognition model selection means for selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; a transmission means for transmitting the recognition model to the vehicle via a network; and a data collection means for collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- The server according to Claim 10, further having a parameter determination means for determining, based on the selected recognition model, a parameter that specifies information to be transmitted by the vehicle when it is determined in the vehicle that the vehicle is in a situation corresponding to the specific scene.
- The server according to Claim 10 or 11, wherein the recognition model selection means selects the recognition model to be transmitted to the vehicle based on position information of the vehicle.
- The server according to Claim 12, wherein the recognition model selection means acquires, from regional characteristic information in which a geographical position is associated with the specific scene, information indicating the specific scene associated with the position of the vehicle, and selects a recognition model for identifying the specific scene indicated by the acquired information as the recognition model to be transmitted to the vehicle.
- The server according to any one of Claims 10 to 13, wherein the recognition model selection means selects the recognition model according to the type of road on which the vehicle is traveling.
- The server according to any one of Claims 10 to 14, further having an analyzer that performs analysis using the information collected by the data collection means.
- A vehicle having: a scene determination means for determining whether or not the vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and a data transmission means for transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
- The vehicle according to Claim 16, wherein, when it is determined that the vehicle is in a situation corresponding to the specific scene, the data transmission means transmits to the server the information specified in a parameter received from the server that specifies the information to be transmitted to the server.
- The vehicle according to Claim 16 or 17, wherein the sensor information includes at least one of information acquired using a camera mounted on the vehicle, information acquired using a speed sensor, and information acquired using an acceleration sensor.
- The vehicle according to any one of Claims 16 to 18, further having a recognition model update means for receiving the recognition model from the server and updating the recognition model used by the scene determination means with the received recognition model.
- An information collection method comprising: selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; transmitting the recognition model to the vehicle via a network; and collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- An information transmission method comprising: determining whether or not a vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
- A non-transitory computer-readable medium storing a program for causing a computer to execute processing of: selecting a recognition model for identifying, based on sensor information, that a vehicle is in a situation corresponding to a specific scene; transmitting the recognition model to the vehicle via a network; and collecting information from the vehicle when it is determined in the vehicle, based on the recognition model and sensor information, that the vehicle is in a situation corresponding to the specific scene.
- A non-transitory computer-readable medium storing a program for causing a processor to execute processing of: determining whether or not a vehicle is in a situation corresponding to a specific scene, based on sensor information and on a recognition model received from a server via a network for identifying, based on sensor information, that the vehicle is in a situation corresponding to the specific scene; and transmitting information to the server via the network when it is determined that the vehicle is in a situation corresponding to the specific scene.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2020/044197 WO2022113261A1 (ja) | 2020-11-27 | 2020-11-27 | Information collection system, server, vehicle, method, and computer-readable medium |
JP2022564926A JPWO2022113261A5 (ja) | 2020-11-27 | Information collection system, server, vehicle, method, and program | |
US18/037,290 US20240005672A1 (en) | 2020-11-27 | 2020-11-27 | Information collection system, server, and information collection method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2020/044197 WO2022113261A1 (ja) | 2020-11-27 | 2020-11-27 | Information collection system, server, vehicle, method, and computer-readable medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022113261A1 true WO2022113261A1 (ja) | 2022-06-02 |
Family
ID=81755418
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/044197 WO2022113261A1 (ja) | 2020-11-27 | 2020-11-27 | Information collection system, server, vehicle, method, and computer-readable medium |
Country Status (2)
Country | Link |
---|---|
US (1) | US20240005672A1 (ja) |
WO (1) | WO2022113261A1 (ja) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018136754A (ja) * | 2017-02-22 | 2018-08-30 | Hitachi, Ltd. | Information processing device, mobility data collection system |
US20190265712A1 (en) * | 2018-02-27 | 2019-08-29 | Nauto, Inc. | Method for determining driving policy |
Also Published As
Publication number | Publication date |
---|---|
JPWO2022113261A1 (ja) | 2022-06-02 |
US20240005672A1 (en) | 2024-01-04 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20963525 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18037290 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2022564926 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20963525 Country of ref document: EP Kind code of ref document: A1 |