WO2023286866A1 - Calculation device, calculation method, and program - Google Patents

Calculation device, calculation method, and program Download PDF

Info

Publication number
WO2023286866A1
Authority
WO
WIPO (PCT)
Prior art keywords
behavior
behavior information
vehicle
detected
model
Prior art date
Application number
PCT/JP2022/027926
Other languages
French (fr)
Japanese (ja)
Inventor
伸一 高松
悠 首藤
Original Assignee
Kyb株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kyb株式会社 filed Critical Kyb株式会社
Priority to JP2022562408A priority Critical patent/JP7232967B1/en
Publication of WO2023286866A1 publication Critical patent/WO2023286866A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles

Definitions

  • The present invention relates to an arithmetic device, an arithmetic method, and a program.
  • According to Patent Document 1, there is known a technique for estimating road surface conditions by detecting the acceleration of a vehicle traveling on a road and inputting the acceleration data into a learning model.
  • According to one aspect, an arithmetic device includes an acquisition unit that acquires behavior information indicating the behavior of a vehicle detected by a behavior sensor provided in the vehicle traveling on a road;
  • a first determination unit that determines, based on the behavior information detected at a predetermined position and first reference behavior information, which is behavior information of the vehicle detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the behavior of the vehicle indicated by the behavior information detected at the predetermined position corresponds to the behavior of the vehicle indicated by the first reference behavior information;
  • a second determination unit that, when it is determined that the behavior of the vehicle indicated by the behavior information detected at the predetermined position does not correspond to the behavior of the vehicle indicated by the first reference behavior information, determines whether the behavior of the vehicle indicated by behavior information detected in a second region, which includes the entire first region and is wider than the first region, corresponds to the behavior of the vehicle indicated by second reference behavior information;
  • and a learning unit that causes an AI model to perform machine learning of the correspondence relationship between vehicle behavior and road surface condition, wherein, when it is determined that the behavior of the vehicle indicated by the behavior information detected in the second region does not correspond to the behavior of the vehicle indicated by the second reference behavior information, the learning unit updates the AI model using the behavior information detected in the second region as teacher data.
  • According to one aspect, a calculation method includes: a step of acquiring behavior information indicating the behavior of a vehicle detected by a behavior sensor provided in the vehicle traveling on a road; a step of determining, based on the behavior information detected at a predetermined position and first reference behavior information, which is behavior information of the vehicle detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the behavior of the vehicle indicated by the behavior information detected at the predetermined position corresponds to the behavior of the vehicle indicated by the first reference behavior information; a step of determining, when they do not correspond, whether the behavior of the vehicle indicated by behavior information detected in a second region that includes the entire first region and is wider than the first region corresponds to the behavior of the vehicle indicated by second reference behavior information; and a step of causing an AI model to perform machine learning of the correspondence relationship between vehicle behavior and road surface condition.
  • In the step of causing the AI model to perform machine learning, when it is determined that the behavior of the vehicle indicated by the behavior information detected in the second region does not correspond to the behavior of the vehicle indicated by the second reference behavior information, the AI model is updated using the behavior information detected in the second region as teacher data.
  • According to one aspect, a program causes a computer to execute: a step of acquiring behavior information indicating the behavior of a vehicle detected by a behavior sensor provided in the vehicle traveling on a road; a step of determining, based on the behavior information detected at a predetermined position and first reference behavior information, which is behavior information of the vehicle detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the behavior of the vehicle indicated by the behavior information detected at the predetermined position corresponds to the behavior of the vehicle indicated by the first reference behavior information;
  • a step of determining, when they do not correspond, whether the behavior of the vehicle indicated by behavior information detected in a second region that includes the entire first region and is wider than the first region corresponds to the behavior of the vehicle indicated by second reference behavior information; and a step of causing an AI model to perform machine learning of the correspondence relationship between vehicle behavior and road surface condition.
  • In the step of causing the AI model to perform machine learning, when it is determined that the behavior of the vehicle indicated by the behavior information detected in the second region does not correspond to the behavior of the vehicle indicated by the second reference behavior information, the AI model is updated using the behavior information detected in the second region as teacher data.
  • FIG. 1 is a schematic block diagram of a detection system according to this embodiment.
  • FIG. 2 is a schematic diagram of a vehicle.
  • FIG. 3 is a schematic block diagram of an arithmetic device.
  • FIG. 4 is a schematic diagram showing an example of types of the first AI model.
  • FIG. 5A is a schematic diagram showing an example of types of the second AI model.
  • FIG. 5B is a schematic diagram illustrating an example of the first area and the second area.
  • FIG. 6 is a flowchart for explaining the processing flow of the arithmetic device.
  • FIG. 1 is a schematic block diagram of a detection system according to this embodiment.
  • the detection system 1 includes a vehicle 10 , a measurement data acquisition device 12 and an arithmetic device 14 .
  • the detection system 1 is a system that determines the road surface condition based on the behavior information of the vehicle 10 traveling on the road. Behavior information is information that indicates the behavior of the vehicle 10 traveling on a road, and will be described later in detail.
  • the detection system 1 estimates the road surface condition by calculating it from the behavior information using the arithmetic device 14 .
  • the road surface condition is an index indicating the degree of unevenness of the road surface in this embodiment.
  • the road surface condition is the IRI (International Roughness Index).
  • the road surface condition is not limited to the IRI, and may be any index indicating the road surface condition.
  • the road surface condition may be at least one of IRI, road surface flatness, cracks, rutting, and MCI (Maintenance Control Index).
  • the vehicle 10 detects behavior information and position information while traveling on a road, and transmits the detected behavior information and position information to the measurement data acquisition device 12 .
  • the measurement data acquisition device 12 is, for example, a device (computer) managed by an entity that manages roads.
  • the measurement data acquisition device 12 transmits the behavior information and position information transmitted from the vehicle 10 to the arithmetic device 14 .
  • the calculation device 14 estimates the road surface condition based on the behavior information and position information transmitted from the measurement data acquisition device 12 .
  • the calculation device 14 then transmits the estimation result of the road surface condition to the measurement data acquisition device 12 .
  • the computing device 14 acquires behavior information and position information via the measurement data acquisition device 12, but is not limited to this.
  • the detection system 1 may omit the measurement data acquisition device 12, in which case the arithmetic device 14 acquires the behavior information and position information directly from the vehicle 10 .
  • FIG. 2 is a schematic diagram of a vehicle.
  • the vehicle 10 includes a position sensor 10A, a behavior sensor 10B, and a measuring device 10C.
  • the position sensor 10A is a sensor that acquires position information of the vehicle 10 .
  • the positional information of the vehicle 10 is information indicating the earth coordinates of the vehicle 10 .
  • Position sensor 10A is a module for GNSS (Global Navigation Satellite System) in this embodiment. Note that the Z direction in FIG. 2 indicates upward in the vertical direction, and FIG. 2 can be said to be a schematic diagram when the vehicle 10 is viewed from above in the vertical direction.
  • the behavior sensor 10B is a sensor that detects behavior information that indicates the behavior of the vehicle 10.
  • the behavior information may be any information as long as it indicates the behavior of the vehicle 10 traveling on the road.
  • the behavior sensor 10B preferably detects the acceleration of the vehicle 10 as behavior information.
  • the behavior sensor 10B is an acceleration sensor that detects acceleration, more preferably an acceleration sensor that detects acceleration on three axes.
  • the behavior information detected by the behavior sensor 10B is not limited to acceleration, and may be at least one of a captured image around the vehicle 10, the speed of the vehicle 10, the steering angle of the vehicle 10, the braking amount of the vehicle 10, the operation amount of the wipers of the vehicle 10, and the operation amount of the suspension of the vehicle 10.
  • the behavior sensor 10B that detects the captured image around the vehicle 10 is, for example, a camera.
  • the behavior sensor 10B that detects the speed of the vehicle 10 is, for example, a speed sensor.
  • the behavior sensor 10B that detects the steering angle of the vehicle 10 is, for example, a steering sensor or a gyro sensor, and the behavior sensor 10B that detects the braking amount of the vehicle 10 is, for example, a brake sensor.
  • An example of the behavior sensor 10B that detects the operation of the wipers of the vehicle 10 is a wiper sensor, and an example of the behavior sensor 10B that detects the amount of operation of the suspension of the vehicle 10 is a suspension sensor.
  • the vehicle 10 is equipped with a plurality of behavior sensors 10B.
  • Each behavior sensor 10B is mounted at a position different from each other in the vehicle 10 .
  • a behavior sensor 10B1 is provided on the Z direction side (vertically upward side) of the left front wheel tire TR1,
  • a behavior sensor 10B2 is provided on the Z direction side of the right front wheel tire TR2,
  • a behavior sensor 10B3 is provided on the Z direction side of the left rear wheel tire TR3, and a behavior sensor 10B4 is provided on the Z direction side of the right rear wheel tire TR4.
  • the position where the behavior sensor 10B is provided is arbitrary.
  • the number of behavior sensors 10B is not limited to four, and may be any number.
  • the behavior sensors 10B1 to 10B4 detect the same type of behavior information (here, acceleration), but each behavior sensor 10B may detect a different type of behavior information.
  • For example, the vehicle 10 may be equipped with multiple behavior sensors 10B of the same type (for example, multiple acceleration sensors), or with behavior sensors 10B of different types (for example, acceleration sensors and speed sensors).
  • behavior information detected by the behavior sensors 10B1, 10B2, 10B3, and 10B4 is hereinafter referred to as behavior information D1, D2, D3, and D4 as appropriate.
  • the measuring device 10C is a device that controls the position sensor 10A and the behavior sensor 10B to detect position information and behavior information of the vehicle 10, and records the detected position information and behavior information. That is, the measuring device 10C functions as a data logger that records position information and behavior information of the vehicle 10 .
  • the measuring device 10C can also be said to be a computer, and includes a control section 10C1, a storage section 10C2, and a communication section 10C3.
  • the control unit 10C1 is an arithmetic unit and includes an arithmetic circuit such as a CPU (Central Processing Unit).
  • the storage unit 10C2 is a memory that stores various types of information such as the computation contents and programs of the control unit 10C1 and the position information and behavior information of the vehicle 10, and includes, for example, at least one of a RAM (Random Access Memory), a ROM (Read Only Memory), and a non-volatile storage device such as a flash memory or an HDD (Hard Disk Drive). Note that the program for the control unit 10C1 stored in the storage unit 10C2 may be stored in a recording medium readable by the measuring device 10C.
  • the communication unit 10C3 is a communication module that communicates with an external device, such as an antenna.
  • the control unit 10C1 reads the program stored in the storage unit 10C2 and controls the position sensor 10A and the behavior sensor 10B.
  • Control unit 10C1 causes behavior sensor 10B to detect behavior information of vehicle 10 at predetermined time intervals while vehicle 10 is traveling on the road.
  • the control unit 10C1 causes the position sensor 10A to detect the position information of the vehicle 10 at the timing when the behavior sensor 10B detects the behavior information. That is, control unit 10C1 causes behavior sensor 10B to detect behavior information of vehicle 10 and causes position sensor 10A to detect position information of vehicle 10 each time vehicle 10 travels for a predetermined period of time.
  • the predetermined time here is preferably a fixed time such as one minute, but the predetermined time is not limited to a fixed time and may be any length. That is, the predetermined time may change each time.
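The fixed-interval logging by the measuring device 10C can be sketched as follows; the class and field names and the 60-second interval are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Sample:
    t: float             # detection time (s)
    position: tuple      # earth coordinates from the position sensor 10A
    acceleration: tuple  # 3-axis acceleration from the behavior sensor 10B

@dataclass
class DataLogger:
    """Records one position/behavior pair every `interval` seconds,
    like the measuring device 10C acting as a data logger."""
    interval: float = 60.0
    records: list = field(default_factory=list)
    _last_t: float = float("-inf")

    def tick(self, t, position, acceleration):
        # Detect and record only once per interval while the vehicle travels.
        if t - self._last_t >= self.interval:
            self.records.append(Sample(t, position, acceleration))
            self._last_t = t

logger = DataLogger(interval=60.0)
for t in range(0, 300, 10):  # simulated 10 s ticks over 5 minutes
    logger.tick(t, (35.0, 139.0), (0.0, 0.0, 9.8))
print(len(logger.records))  # → 5 (one sample per 60 s window)
```

Because the disclosure allows the predetermined time to vary, `interval` could also be changed between ticks without altering the structure.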
  • the control unit 10C1 transmits the behavior information and the position information to the measurement data acquisition device 12 via the communication unit 10C3.
  • the measurement data acquisition device 12 transmits the behavior information and position information received from the vehicle 10 to the arithmetic device 14 . If the measurement data acquisition device 12 is not provided, the control section 10C1 may transmit the behavior information and the position information directly to the arithmetic device 14 .
  • FIG. 3 is a schematic block diagram of an arithmetic device.
  • the computing device 14 is, for example, a computer, and includes a communication section 20, a storage section 22, and a control section 24.
  • the communication unit 20 is a communication module that communicates with an external device, such as an antenna.
  • the storage unit 22 is a memory that stores various information such as the computation contents and programs of the control unit 24 and the first AI model M and second AI model N described later, and includes, for example, at least one of a RAM, a ROM, and a non-volatile storage device such as a flash memory or an HDD. Note that the program for the control unit 24 and the first AI model M and second AI model N stored in the storage unit 22 may be stored in a recording medium readable by the computing device 14 .
  • the control unit 24 is an arithmetic device and includes an arithmetic circuit such as a CPU (Central Processing Unit).
  • Control unit 24 includes learning unit 30 , behavior information acquisition unit 32 , first determination unit 34 , second determination unit 36 , and calculation unit 38 .
  • the control unit 24 implements the learning unit 30, the behavior information acquisition unit 32, the first determination unit 34, the second determination unit 36, and the calculation unit 38 by reading a program (software) from the storage unit 22 and executing it, thereby performing their operations.
  • the control unit 24 may execute these processes by one CPU, or may be provided with a plurality of CPUs and may execute the processes by the plurality of CPUs.
  • at least a part of the learning unit 30, the behavior information acquisition unit 32, the first determination unit 34, the second determination unit 36, and the calculation unit 38 may be realized by hardware.
  • the learning unit 30 generates a learned first AI model M by subjecting the untrained first AI model M to machine learning.
  • the learning unit 30 performs machine learning on the first AI model M so that information indicating whether the behavior information is normal is output from the first AI model M.
  • the first AI model M is an AI (Artificial Intelligence) model that detects anomalies in behavior information.
  • Any model may be used as the first AI model M.
  • a model that performs logistic regression analysis, a CNN (Convolutional Neural Network) model, or the like may be used.
  • That the behavior information is normal means that the behavior of the vehicle 10 indicated by the behavior information corresponds to the behavior of the vehicle 10 indicated by the reference behavior information, which is behavior information of the vehicle 10 detected before that behavior information.
  • In other words, the behavior information being normal means that the behavior of the vehicle 10 indicated by the behavior information is of the same quality as the behavior of the vehicle 10 indicated by the reference behavior information, for example, that the behavior of the vehicle 10 is similar to that indicated by the reference behavior information.
  • Conversely, the behavior information being not normal (abnormal) means that the behavior information does not correspond to the reference behavior information, in other words, that the behavior information differs from the reference behavior information, for example, that the behavior information is not similar to the reference behavior information.
  • the reference behavior information is data (acceleration in this embodiment) detected by causing the vehicle 10 to travel on the same road before the behavior information is detected and the road surface condition is estimated. It can be said to be initial behavior information from before the road surface deteriorated.
  • the learning unit 30 sets a data set of reference behavior information and a label indicating that the reference behavior information is normal as teacher data. That is, the learning unit 30 sets normal data as teacher data.
  • the learning unit 30 prepares a plurality of data sets of reference behavior information and labels as teacher data.
  • the learning unit 30 inputs the data sets (teacher data) of reference behavior information and labels to the unlearned first AI model M, and machine-learns the first AI model M so that, when behavior information is input, it can output information as to whether the behavior information is normal (that is, so that it can label the behavior information as normal or not).
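A minimal stand-in for this training scheme might look as follows; the disclosure leaves the model open (logistic regression, a CNN, or others), so a simple nearest-reference rule is substituted here purely for illustration, and all names and values are assumptions:

```python
class FirstAIModel:
    """Illustrative stand-in for the first AI model M: trained only on
    teacher data labelled "normal" (reference behavior information), it
    labels new behavior information as normal or not."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold  # max distance to any reference sample
        self.reference = []

    def fit(self, teacher_data):
        # teacher_data: data sets of (reference behavior, label) pairs,
        # all labelled "normal" as described above.
        self.reference = [x for x, label in teacher_data if label == "normal"]

    def predict_label(self, behavior):
        # Normal if the behavior is close to at least one reference sample.
        nearest = min(abs(behavior - r) for r in self.reference)
        return "normal" if nearest <= self.threshold else "not normal"

teacher = [(9.7, "normal"), (9.8, "normal"), (9.9, "normal")]
m = FirstAIModel(threshold=0.5)
m.fit(teacher)
print(m.predict_label(9.85))  # close to the reference behavior → "normal"
print(m.predict_label(12.0))  # far from any reference → "not normal"
```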
  • FIG. 4 is a schematic diagram showing an example of the types of the first AI model.
  • the learning unit 30 prepares the first AI model M for each combination of the unit area, the motion state range, and the behavior sensor 10B that detected the behavior information.
  • a unit area refers to one divided area when the area in which the vehicle 10 travels is divided into predetermined areas.
  • the motion state range refers to the range of the motion state of the vehicle, and in this embodiment refers to the speed range of the vehicle.
  • for example, the range between the assumed upper limit and lower limit of the vehicle speed may be divided into a plurality of speed ranges, and each divided speed range may be set as a motion state range.
  • When machine-learning the first AI model M corresponding to a combination of a predetermined behavior sensor 10B, a predetermined unit area, and a predetermined motion state range, the learning unit 30 uses as teacher data the reference behavior information that was detected by that behavior sensor 10B while the vehicle position was within that unit area and the vehicle speed was within that motion state range.
  • For example, in FIG. 4, the first AI model M1a is a model corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior sensor 10B1 (its input data is the behavior information D1).
  • In this case, the learning unit 30 extracts the reference behavior information for which the vehicle position at detection was within the unit area UR1, the vehicle speed at detection was within the motion state range C1, and which was detected by the behavior sensor 10B1.
  • the learning unit 30 then machine-learns the first AI model M1a using the extracted reference behavior information as teacher data for the first AI model M1a.
  • In addition to the first AI model M1a, FIG. 4 illustrates a first AI model M1b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D1;
  • a first AI model M2a corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior information D2;
  • a first AI model M2b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D2;
  • a first AI model M3a corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior information D3;
  • a first AI model M3b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D3;
  • a first AI model M4a corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior information D4;
  • and a first AI model M4b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D4.
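The per-combination bookkeeping of FIG. 4 can be sketched as a lookup table; the speed bounds assumed for C1 and C2 and the helper names are illustrative, not from the disclosure:

```python
# Assumed speed bounds (km/h) for the motion state ranges C1 and C2.
SPEED_RANGES = {"C1": (0.0, 40.0), "C2": (40.0, 80.0)}

# One first AI model per (unit area, motion state range, behavior sensor)
# combination, mirroring FIG. 4 (strings stand in for trained models).
models = {}
for area in ["UR1"]:
    for rng in ["C1", "C2"]:
        for sensor in ["D1", "D2", "D3", "D4"]:
            models[(area, rng, sensor)] = f"first AI model for {(area, rng, sensor)}"

def motion_state_range(speed):
    # Map a detected vehicle speed to the motion state range containing it.
    for name, (lo, hi) in SPEED_RANGES.items():
        if lo <= speed < hi:
            return name
    raise ValueError("speed outside all assumed ranges")

def select_model(unit_area, speed, sensor):
    # Look up the learned model for this combination, as the first
    # determination unit 34 does before inputting the behavior information.
    return models[(unit_area, motion_state_range(speed), sensor)]

print(select_model("UR1", 30.0, "D1"))  # → first AI model for ('UR1', 'C1', 'D1')
```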
  • the learning unit 30 causes the unlearned second AI model N to machine-learn the correspondence relationship between the behavior of the vehicle and the road surface condition, thereby generating a learned second AI model N that has learned that correspondence.
  • The learning unit 30 may machine-learn the correspondence between the vehicle behavior detected by one behavior sensor 10B and the road surface condition, or may machine-learn the correspondence between the vehicle behavior detected by a plurality of behavior sensors 10B and the road surface condition.
  • the second AI model N may be any model capable of outputting road surface conditions when behavior information is input, but may be, for example, a CNN model.
  • the learning unit 30 sets a data set of the reference behavior information and the road surface condition at the position where the reference behavior information is detected as teacher data. A value measured in advance may be used for the road surface condition at the position where the reference behavior information is detected.
  • the learning unit 30 sets a plurality of data sets of reference behavior information and road surface conditions as teacher data.
  • the learning unit 30 inputs the data sets of reference behavior information and road surface conditions set as teacher data to the unlearned second AI model N, and causes the second AI model N to machine-learn the correspondence relationship between behavior information and road surface condition.
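A toy stand-in for the second AI model N, substituting a one-variable linear regression for the unspecified model (the disclosure suggests e.g. a CNN) and using made-up teacher values, might look like:

```python
import statistics

class SecondAIModel:
    """Illustrative stand-in for the second AI model N: learns the
    correspondence between behavior information and road surface
    condition (here IRI) from teacher data."""

    def fit(self, behavior, iri):
        # Ordinary least squares over (behavior, IRI) teacher pairs.
        bx, by = statistics.fmean(behavior), statistics.fmean(iri)
        sxx = sum((x - bx) ** 2 for x in behavior)
        sxy = sum((x - bx) * (y - by) for x, y in zip(behavior, iri))
        self.slope = sxy / sxx
        self.intercept = by - self.slope * bx

    def predict(self, behavior):
        # Output the estimated road surface condition for new behavior.
        return self.intercept + self.slope * behavior

# Teacher data: reference behavior (e.g. acceleration variance) paired with
# the IRI measured in advance at the same positions (values are made up).
n = SecondAIModel()
n.fit([0.1, 0.2, 0.3, 0.4], [1.0, 2.0, 3.0, 4.0])
print(round(n.predict(0.25), 2))  # → 2.5
```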
  • FIG. 5A is a schematic diagram showing an example of types of the second AI model.
  • the learning unit 30 prepares a plurality of second AI models N in which at least some of the behavior sensors 10B that detect behavior information as input data are different. That is, each second AI model N differs in at least part of the behavior information input as input data.
  • For example, the second AI model N1 is input with the behavior information D1, D2, D3, and D4 detected by the behavior sensors 10B1, 10B2, 10B3, and 10B4, and the second AI model N2 is input with the behavior information D1, D2, and D3 detected by the behavior sensors 10B1, 10B2, and 10B3.
  • the second AI model N3 is input with the behavior information D1, D2, and D4 detected by the behavior sensors 10B1, 10B2, and 10B4, and the second AI model N4 is input with the behavior information D1, D3, and D4 detected by the behavior sensors 10B1, 10B3, and 10B4.
  • FIG. 5A also illustrates a second AI model N5 to which the behavior information D2, D3, and D4 detected by the behavior sensors 10B2, 10B3, and 10B4 is input, and a second AI model Nn to which the behavior information D4 detected by the behavior sensor 10B4 is input.
  • For each second AI model N, the learning unit 30 machine-learns the model using, as teacher data, the reference behavior information detected by the corresponding behavior sensors 10B. For example, the learning unit 30 machine-learns the second AI model N1 using, as its teacher data, the reference behavior information detected by the behavior sensors 10B1, 10B2, 10B3, and 10B4 and the road surface conditions at the positions where that reference behavior information was detected.
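One way to organize the sensor-subset models of FIG. 5A is to key each model by the set of behavior sensors it takes as input; the dictionary layout below, and the idea of falling back to the model whose inputs match the available sensors, are illustrative assumptions:

```python
# Each second AI model keyed by the frozenset of behavior sensors whose
# behavior information it takes as input, mirroring FIG. 5A (strings
# stand in for trained models).
second_models = {
    frozenset({"10B1", "10B2", "10B3", "10B4"}): "N1",
    frozenset({"10B1", "10B2", "10B3"}): "N2",
    frozenset({"10B1", "10B2", "10B4"}): "N3",
    frozenset({"10B1", "10B3", "10B4"}): "N4",
    frozenset({"10B2", "10B3", "10B4"}): "N5",
    frozenset({"10B4"}): "Nn",
}

def pick_model(available_sensors):
    # Select the second AI model whose inputs exactly match the sensors
    # that produced usable behavior information.
    return second_models[frozenset(available_sensors)]

print(pick_model({"10B1", "10B2", "10B3", "10B4"}))  # → N1
print(pick_model({"10B2", "10B3", "10B4"}))          # → N5
```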
  • the behavior information acquisition unit 32 acquires behavior information detected by the behavior sensor 10B. More specifically, the behavior information acquisition unit 32 acquires behavior information and position information indicating the position of the vehicle 10 when the behavior information is detected. The behavior information acquisition unit 32 acquires behavior information and position information from the measurement data acquisition device 12 via the communication unit 20 , but may also acquire the behavior information and the position information directly from the vehicle 10 .
  • the first determination unit 34 determines, based on the behavior information detected at a predetermined position and the first reference behavior information (reference behavior information detected in a first area overlapping the predetermined position, before the behavior information detected at the predetermined position), whether the behavior information detected at the predetermined position corresponds to the reference behavior information detected in the first area, that is, whether the behavior information is normal.
  • the predetermined position here may be any point (position) on the route when the vehicle 10 travels on the road. That is, the first determination unit 34 may extract behavior information detected at an arbitrary position as behavior information detected at a predetermined position from a plurality of pieces of behavior information detected at different positions.
  • the first area may be any area of a predetermined size that overlaps the position (the predetermined position) of the vehicle 10 when the behavior information was detected.
  • in this embodiment, the unit area overlapping the position of the vehicle 10 when the behavior information was detected is used as the first area. That is, the first determination unit 34 extracts, as the first area, the unit area overlapping the position of the vehicle 10 indicated by the position information acquired by the behavior information acquisition unit 32 (the position information of the vehicle 10 when the behavior information was detected), and determines whether the behavior information is normal based on the extracted reference behavior information and the behavior information detected in the first area.
  • the first determination unit 34 reads, from among the plurality of first AI models M, the learned first AI model M corresponding to the first area (the unit area overlapping the position of the vehicle 10 when the behavior information was detected), and inputs the behavior information detected at the predetermined position to the read first AI model M. The first determination unit 34 then acquires the information output from the first AI model M indicating whether the behavior information is normal, and determines, based on the acquired information, whether the behavior information detected at the predetermined position is normal.
  • For example, when a label indicating that the behavior information is normal is output from the first AI model M1a, the first determination unit 34 may determine that the behavior information is normal, and
  • when a label indicating that the behavior information is not normal is output from the first AI model M1a, it may determine that the behavior information is not normal (abnormal).
  • Alternatively, when the probability that the behavior information is normal is output from the first AI model M1a,
  • the first determination unit 34 may determine that the behavior information is normal when that probability is equal to or greater than a predetermined value, and that the behavior information is not normal when that probability is less than the predetermined value.
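Both output variants (a label or a probability) reduce to a small decision rule; the 0.8 threshold below is an illustrative assumption, not a value from the disclosure:

```python
def judge_normal(model_output, threshold=0.8):
    """Interpret the first AI model's output as in the two variants above:
    either a direct normal / not-normal label, or a probability that the
    behavior information is normal, compared against a predetermined value."""
    if isinstance(model_output, str):   # label variant
        return model_output == "normal"
    return model_output >= threshold    # probability variant

print(judge_normal("normal"))      # → True
print(judge_normal(0.95))          # → True  (probability above threshold)
print(judge_normal(0.4))           # → False (probability below threshold)
```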
  • the behavior information acquisition unit 32 also acquires the motion state of the vehicle 10 when the behavior information is detected, along with the behavior information and the position information.
  • the motion state is the speed of the vehicle 10 in this embodiment, and is detected by a speed sensor mounted on the vehicle 10 .
  • the first determination unit 34 extracts a motion state range (vehicle speed range) that overlaps with the motion state (vehicle speed) of the vehicle 10 when the behavior information is detected.
  • the first determination unit 34 reads out the first AI model M corresponding to the combination of the behavior sensor 10B that detected the behavior information, the unit area extracted as the first area, and the extracted motion state range.
  • the first determination unit 34 inputs the behavior information to the read first AI model M and determines whether the behavior information is normal.
  • the first determination unit 34 reads the first AI model M1a (see FIG. 4), inputs the behavior information to the first AI model M1a, and determines whether the behavior information is normal. Even when the first AI model M is used in this way, since the first AI model M is learned based on the reference behavior information, it can be said that whether the behavior information is normal is determined based on the reference behavior information.
  • the behavior information acquisition unit 32 acquires behavior information detected by each behavior sensor 10B at the same position.
  • the first determination unit 34 determines whether each piece of behavior information detected by each behavior sensor 10B at the same predetermined position is normal. That is, for example, the first determination unit 34 determines whether each of the behavior information D1 detected by the behavior sensor 10B1, the behavior information D2 detected by the behavior sensor 10B2, the behavior information D3 detected by the behavior sensor 10B3, and the behavior information D4 detected by the behavior sensor 10B4 at the predetermined position is normal.
  • the determination of whether the behavior information is normal by the first determination unit 34 is not limited to using the first AI model M.
  • the first determination unit 34 may set a range of behavior information to be normal (normal range) based on the reference behavior information, and determine whether the behavior information is normal based on the normal range.
  • the first determination unit 34 determines that the behavior information is normal when the behavior information acquired by the behavior information acquisition unit 32 falls within the normal range, and determines that it is not normal when it falls outside the normal range.
  • the normal range may be set by any method. For example, a 3σ range calculated from a plurality of pieces of reference behavior information assuming a normal distribution may be set as the normal range.
  • the normal range may be set for each combination of the unit area, the motion state range, and the behavior sensor 10B that detected the behavior information, as with the first AI model M.
  • the first determination unit 34 reads the normal range set for the combination of the unit area, the motion state range, and the behavior sensor 10B corresponding to the behavior information, and determines that the behavior information is normal if it falls within that normal range.
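  As an illustration of the 3σ normal range described above, the following sketch (function names and numeric values are hypothetical, not part of the patent) derives a normal range from past reference behavior values for one combination of unit area, motion state range, and sensor, then checks new behavior information against it:

```python
import statistics

def make_normal_range(reference_values, k=3.0):
    """Derive a normal range (mean ± k·σ) from past reference behavior values."""
    mean = statistics.fmean(reference_values)
    sigma = statistics.pstdev(reference_values)
    return (mean - k * sigma, mean + k * sigma)

def is_normal(value, normal_range):
    """Behavior information is judged normal when it falls within the range."""
    low, high = normal_range
    return low <= value <= high

# Hypothetical reference accelerations for one (unit area, motion state, sensor) combination
reference = [0.98, 1.02, 1.00, 0.97, 1.03, 1.01, 0.99]
rng = make_normal_range(reference)
print(is_normal(1.00, rng))  # prints True: within 3σ of the reference data
print(is_normal(2.50, rng))  # prints False: far outside the normal range
```

  Here the mean is 1.00 and σ ≈ 0.02, so the normal range spans roughly 0.94 to 1.06.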
  • the second determination unit 36 determines, based on the behavior information detected in a second area that is wider than the first area and the reference behavior information detected in the second area (second reference behavior information, detected in the second region before the behavior information detected in the second region), whether the behavior information detected in the second region corresponds to the reference behavior information, that is, whether the behavior information detected in the second region is normal.
  • the second determination unit 36 determines whether the behavior information detected in the second area is normal when the first determination unit 34 determines that the behavior information detected at the predetermined position is not normal. In the present embodiment, the second determination unit 36 makes this determination when it is determined that all of the behavior information detected by each behavior sensor 10B at the predetermined position is not normal. That is, in the present embodiment, when it is determined that only the behavior information from some of the behavior sensors 10B at the predetermined position is not normal, it is not necessary to determine whether the behavior information detected within the second area is normal.
  • FIG. 5B is a schematic diagram explaining an example of the first area and the second area.
  • the second area may be an area at any position in the area where the vehicle 10 travels, as long as it is wider than the first area.
  • the second area preferably includes the entire first area and is wider than the first area.
  • the second area may be the entire area (entire section) traveled by the vehicle 10 .
  • in FIG. 5B, it is assumed that behavior information is obtained at predetermined positions Pa, Pb, Pc, and Pd in the area AR in which the vehicle 10 travels.
  • the first regions (unit regions) overlapping the predetermined positions Pa, Pb, Pc, and Pd can be said to be the first regions UR1a, UR1b, UR1c, and UR1d, respectively.
  • the second region UR2 is wider than the first regions UR1a, UR1b, UR1c, and UR1d, and may be an area that includes the entirety of the first regions UR1a, UR1b, UR1c, and UR1d, or an area that includes the entire area AR in which the vehicle 10 travels.
  • however, the present invention is not limited to this, and for example, the second region UR2 may include only the entirety of the first regions UR1a and UR1b.
  • the second determination unit 36 determines, for each piece of behavior information detected at a different position in the second region, whether that behavior information is normal, and determines whether the behavior information detected in the second area is normal based on those determination results.
  • the second determination unit 36 may determine whether the behavior information for each position is normal by the same method as the determination method used by the first determination unit 34. That is, for example, the second determination unit 36 identifies the combination of the behavior sensor 10B that detected the behavior information, the unit area that overlaps the position where the behavior information was detected, and the motion state range that overlaps the motion state when the behavior information was detected, and uses the first AI model M set for that combination.
  • the second determination unit 36 acquires information indicating whether the behavior information is normal by the same method for each piece of behavior information detected at different positions in the second area. The second determination unit 36 determines whether the behavior information detected within the second area is normal based on the information indicating whether each behavior information is normal.
  • the second determination unit 36 may determine that the behavior information detected in the second region is normal if the ratio of pieces of behavior information determined to be normal by the first AI model M is equal to or greater than a predetermined value, and may determine that the behavior information detected in the second region is abnormal if that ratio is less than the predetermined value.
  • the second determination unit 36 may determine whether the behavior information detected by every behavior sensor 10B in the second region is normal, or may determine whether only the behavior information detected by some of the behavior sensors 10B (for example, only the behavior sensor 10B1) is normal. Further, in the present embodiment, the second determination unit 36 determines whether the behavior information detected within the second region is normal based on a plurality of pieces of behavior information detected within the second region and a plurality of pieces of reference behavior information detected within the second region. However, the second determination unit 36 is not limited to using a plurality of pieces of behavior information and a plurality of pieces of reference behavior information; it may determine whether the behavior information detected in the second area is normal based on one piece of behavior information and one piece of reference behavior information.
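  One way the ratio-based region judgment described above could be realized is sketched below (the 70% threshold is an assumption for illustration; the patent only says "a predetermined value"):

```python
def second_determination(point_is_normal, threshold=0.7):
    """Judge the second region normal when the ratio of positions whose
    behavior information was judged normal reaches the predetermined value."""
    ratio = sum(point_is_normal) / len(point_is_normal)
    return ratio >= threshold

# Hypothetical per-position results from the first AI model M within the second region
flags = [True, True, False, True, True]
print(second_determination(flags))  # prints True: 4/5 = 0.8 >= 0.7
```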
  • the behavior information judged by the first determination unit 34 is data for one point at the predetermined position, whereas the behavior information judged by the second determination unit 36 is data for a plurality of points in the second region. That is, it can be said that the second determination unit 36 determines whether a larger number of pieces of behavior information are normal than the first determination unit 34 does. Note that the behavior information judged by the first determination unit 34 is not limited to data for one point and may be data for a plurality of points; even in that case, the second determination unit 36 judges a larger number of pieces of behavior information.
  • the determination of whether the behavior information is normal by the second determination unit 36 is not limited to using the first AI model M.
  • the second determination unit 36 may set a normal range and determine whether the behavior information is normal based on that normal range, as described for the first determination unit 34. Further, the second determination unit 36 may determine whether the behavior information is normal using an AI model different from the first AI model M that can determine whether the behavior information in the second area is normal.
  • here, the behavior information detected at a predetermined position and the reference behavior information detected in the first area and the second area will be described with reference to FIG. 5B.
  • it is assumed that behavior information is acquired at the predetermined positions Pa, Pb, Pc, and Pd, and that the first regions (unit regions) UR1a, UR1b, UR1c, and UR1d are the first regions that overlap the predetermined positions Pa, Pb, Pc, and Pd, respectively.
  • the first determination unit 34 treats the behavior information detected in the past at the position Pa' in the first region UR1a as the reference behavior information detected in the first region UR1a, and determines whether the behavior information detected at the predetermined position Pa corresponds to the reference behavior information detected at the position Pa'. Similarly, the first determination unit 34 determines whether the behavior information detected at each of the predetermined positions Pb, Pc, and Pd corresponds to the reference behavior information detected at each of the positions Pb', Pc', and Pd'. When a plurality of pieces of reference behavior information are detected in one first area, the average value of those pieces of reference behavior information may be treated as the reference behavior information of that first area. That is, for example, the first determination unit 34 may determine whether the behavior information detected at the predetermined position Pa corresponds to the average value of the plurality of pieces of reference behavior information detected within the first region UR1a.
  • the second determination unit 36 treats the behavior information detected in the past at each of the positions Pa', Pb', Pc', and Pd' as the reference behavior information detected in the second region UR2, and determines whether the behavior information detected at the predetermined positions Pa, Pb, Pc, and Pd corresponds to the reference behavior information detected at the positions Pa', Pb', Pc', and Pd'. In this way, by comparing a plurality of pieces of behavior information with a plurality of pieces of reference behavior information detected within the second region UR2, it is possible to appropriately determine whether the behavior information is normal over a wide region.
  • one piece of behavior information may be compared with one piece of reference behavior information detected within the second region UR2.
  • the second determination unit 36 may calculate the average value of the reference behavior information detected at a plurality of positions within the second region UR2, such as the position Pa' and the position Pb', treat that average value as one piece of reference behavior information, and determine whether the behavior information detected at the predetermined position Pa corresponds to it. In this way, by comparing one piece of behavior information with one piece of reference behavior information, it is possible to appropriately determine whether the behavior information is normal in an area close to the position Pa while taking a wide area into account.
  • the calculation unit 38 calculates the road surface condition by inputting the behavior information to the second AI model N that has undergone machine learning. By inputting the behavior information as input data to the second AI model N, the calculation unit 38 obtains, as output data from the second AI model N, an estimation result of the road surface condition at the position where the behavior information was detected. The calculation unit 38 calculates the road surface condition based on the estimation result of the road surface condition acquired from the second AI model N, using different methods according to the determination results of the first determination unit 34 and the second determination unit 36. Each case will be described below.
  • when all of the behavior information detected at the predetermined position is determined to be normal, the calculation unit 38 calculates the road surface condition by inputting the behavior information to the existing second AI model N. That is, in this case, without updating the second AI model N using the behavior information detected this time, the road surface condition is calculated using the existing second AI model N generated based on the previously detected reference behavior information. In this way, when it is determined that all of the behavior information detected at the predetermined position is normal, it is judged that the characteristics of the vehicle 10 and the behavior sensor 10B have not changed, and the road surface condition is estimated using the existing trained second AI model N without updating it.
  • a plurality of second AI models N with different input behavior information are prepared.
  • the calculation unit 38 inputs the behavior information to each of the second AI models N, and acquires the estimation result of the road surface state output from each of the second AI models N.
  • the calculation unit 38 calculates the road surface condition based on the estimation results of the road surface condition output from each of the second AI models N. That is, it can be said that the calculation unit 38 is performing ensemble learning.
  • the calculation unit 38 inputs the behavior information D1, D2, D3, and D4 to the second AI model N1, and acquires the estimated value O1 of the road surface state, which is output data.
  • similarly, the computing unit 38 inputs the behavior information D1, D2, D3 to the second AI model N2, the behavior information D1, D2, D4 to the second AI model N3, the behavior information D1, D3, D4 to the second AI model N4, the behavior information D2, D3, D4 to the second AI model N5, and the behavior information D4 to the second AI model Nn, and acquires the estimated road surface state values O2, O3, O4, O5, and On as output data. Then, the calculation unit 38 calculates the road surface condition based on each of the estimated values O1 to On. For example, the calculation unit 38 may calculate an estimated value R, which is the final estimation result of the road surface state, based on the following equation (1).
  • here, r1, r2, r3, r4, r5, ..., rn are weighting coefficients for the estimated values O1 to On, and n is the number of second AI models N.
  • the weighting coefficients r1 . . . rn may be set arbitrarily.
  • in the present embodiment, the average of the values obtained by multiplying the estimated value of each second AI model N by its weighting factor is calculated as the estimated value of the road surface condition; however, any method that uses the estimated values of the second AI models N may be used to calculate the estimated value of the road surface condition.
  • hereinafter, when the estimated values O1 to On of the respective second AI models N are not distinguished, they are referred to as the estimated value O, and when the weighting coefficients r1 to rn of the respective second AI models N are not distinguished, they are referred to as the weighting coefficient r.
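  Equation (1) itself is not reproduced in this excerpt, but the surrounding text describes the final estimate R as the average of the weighted per-model estimates r·O. A minimal sketch under that reading (all numeric values are illustrative):

```python
def ensemble_estimate(estimates, weights):
    """Average of each model's estimated value O multiplied by its weighting
    coefficient r, taken as the final estimated value R of the road surface state."""
    assert len(estimates) == len(weights)
    return sum(r * o for r, o in zip(weights, estimates)) / len(estimates)

O = [0.52, 0.48, 0.50, 0.55, 0.45]  # estimated values O1..O5 from models N1..N5
r = [1.0] * len(O)                  # weighting coefficients r1..r5 (equal here)
print(ensemble_estimate(O, r))      # prints 0.5
```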
  • when part of the behavior information detected at the predetermined position is determined to be abnormal, the calculation unit 38 calculates the road surface condition while using the existing second AI model N, such that the degree of influence on the road surface condition of the behavior information detected by the behavior sensor 10B determined to be abnormal (abnormal sensor) is lower than the degree of influence on the road surface condition of the behavior information detected by the behavior sensors 10B determined to be normal. For example, when the behavior information detected by the behavior sensor 10B1 is determined to be not normal and the behavior information detected by the behavior sensors 10B2, 10B3, and 10B4 is determined to be normal, the calculation unit 38 calculates the estimated value R of the road surface condition such that the degree of influence of the behavior information D1 of the behavior sensor 10B1 on the estimated value R is lower than the degree of influence of the behavior information D2, D3, and D4 of the behavior sensors 10B2, 10B3, and 10B4 on the estimated value R.
  • that is, in this case, it is assumed that the characteristics of that behavior sensor 10B have changed, so the degree of influence of the behavior information of that behavior sensor 10B is reduced and the road surface condition is estimated.
  • the road surface state is estimated with a reduced degree of influence from the behavior sensor 10B whose characteristics have changed, so that it is possible to suppress deterioration in the estimation accuracy of the road surface state.
  • also in this case, as when all of the behavior information is determined to be normal, the second AI model N is not updated using the behavior information detected this time; instead, the existing second AI model N generated based on the previously detected reference behavior information is used.
  • here, the output data (estimated value O) from a second AI model N (first model) whose input data includes the behavior information of the behavior sensor 10B determined to be abnormal is referred to as a first estimation result, and the output data (estimated value O) from a second AI model N (second model) whose input data does not include the behavior information of the behavior sensor 10B determined to be not normal is referred to as a second estimation result. The calculation unit 38 obtains the first estimation result by inputting the behavior information of the behavior sensor 10B determined to be abnormal to the first model, obtains the second estimation result from the second model, and calculates the estimated value R of the road surface condition such that the degree of influence of the first estimation result on the estimated value R is lower than the degree of influence of the second estimation result on the estimated value R.
  • specifically, the calculation unit 38 reduces the weighting factor r for the estimated value O (first estimation result) of each second AI model N whose input data includes the behavior information of the abnormal behavior sensor 10B. For example, the degree of influence on the estimated value R of the estimated values O1 to O4 of the second AI models N1 to N4, to which the behavior information of the behavior sensor 10B1 is input, is made lower than the degree of influence on the estimated value R of the estimated values O5 and On of the second AI models N5 and Nn, to which the behavior information of the behavior sensor 10B1 is not input.
  • the method of reducing the degree of influence of the behavior information of the behavior sensor 10B determined to be abnormal is not limited to reducing the weighting factor r, and any method may be used.
  • the calculation unit 38 may correct the behavior information of the behavior sensor 10B determined to be abnormal, thereby reducing the degree of influence of the behavior information.
  • for example, the calculation unit 38 may input the behavior information of the behavior sensor 10B determined to be abnormal to the second AI model N as an infinite value. In this case, the calculation of each second AI model N to which the behavior information of the abnormal behavior sensor 10B is input results in an error and its estimated value O becomes invalid, so the degree of influence can be reduced, and the estimated value R may be calculated from the remaining valid estimated values.
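  The weight-reduction scheme above could be sketched as follows (the scaling factor 0.1 and the model-to-sensor mapping are illustrative assumptions, not values from the patent):

```python
def reduce_abnormal_weights(weights, uses_abnormal_sensor, factor=0.1):
    """Scale down the weighting coefficient r of every second AI model whose
    input includes behavior information from the sensor judged abnormal."""
    return [r * factor if uses else r
            for r, uses in zip(weights, uses_abnormal_sensor)]

r = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# Models N1..N4 take the abnormal sensor 10B1's information D1 as input; N5 and Nn do not
uses_d1 = [True, True, True, True, False, False]
print(reduce_abnormal_weights(r, uses_d1))  # prints [0.1, 0.1, 0.1, 0.1, 1.0, 1.0]
```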
  • when all of the behavior information detected at the predetermined position is determined to be abnormal, the second determination unit 36 determines whether the behavior information detected within the second area is normal. When the second determination unit 36 determines that the behavior information detected in the second region is normal, the calculation unit 38 calculates the road surface condition by inputting the behavior information to the existing second AI model N. That is, in this case, as when all of the behavior information detected at the predetermined position is determined to be normal, the behavior information detected this time is not used to update the second AI model N, and the road surface condition is calculated using the existing second AI model N generated based on the previously detected reference behavior information. In this case, it is judged that the characteristics of the vehicle 10 and the behavior sensor 10B have not changed but rather that the road surface condition at the point of the predetermined position has changed, so the road surface condition is estimated using the existing trained second AI model N without updating it.
  • the learning unit 30 updates the second AI model N when the second determination unit 36 determines that the behavior information detected in the second area is not normal. Specifically, the learning unit 30 performs machine learning on the second AI model N using the behavior information detected in the second region as teacher data, thereby updating the second AI model N used for estimating the road surface condition.
  • the behavior information used to update the second AI model N is the group of behavior information acquired when the behavior information determined to be abnormal by the second determination unit 36 was detected (that is, the group of behavior information detected during the current run of the vehicle 10), and can be said to be behavior information detected in the second region and acquired after the reference behavior information used for training the second AI model N before the update.
  • the learning unit 30 causes the second AI model N to perform machine learning by inputting a data set of behavior information and road surface conditions as teacher data into an untrained second AI model N, thereby creating a new trained model.
  • the road surface conditions used as teacher data for the new second AI model N may be the measured road surface conditions used as teacher data for the second AI model N before the update, rather than newly measured values. However, among the measured road surface conditions, those judged to be unusable for reasons such as the data being old need not be used as teacher data.
  • the calculation unit 38 uses the new second AI model N updated using the behavior information to estimate the road surface condition.
  • the behavior information detected this time is used to generate a new second AI model N, so it is not used to estimate the road surface state. That is, the calculation unit 38 inputs new behavior information detected after updating the second AI model N to the updated second AI model N, thereby estimating the road surface state.
  • in this case, it is judged that the characteristics of the vehicle 10 as a whole or of the behavior sensors 10B as a whole, rather than of an individual behavior sensor 10B, have changed, so the existing second AI model N is not used and the second AI model N is updated using the latest behavior information. Therefore, it becomes possible to use a second AI model N adapted to changes in the characteristics of the vehicle 10, and deterioration in the estimation accuracy of the road surface state can be suppressed.
  • as described above, in the present embodiment, when part of the behavior information detected at the predetermined position is abnormal, the process of reducing the degree of influence of the abnormal behavior sensor 10B is performed, and when all of the behavior information detected at the predetermined position is abnormal and the behavior information detected within the second area is not normal, the process of updating the second AI model N is performed. However, it is not limited to performing all of these processes; at least some of them may be performed. That is, for example, only the process of updating the second AI model N when the behavior information detected at the predetermined position is not normal and the behavior information detected within the second area is not normal may be performed. Conversely, only the process of reducing the degree of influence of the abnormal behavior sensor 10B when part of the behavior information detected at the predetermined position is abnormal may be performed.
  • FIG. 6 is a flowchart for explaining the processing flow of the arithmetic device.
  • the computing device 14 acquires the behavior information through the behavior information acquisition unit 32 (step S10), and the first determination unit 34 determines whether all of the behavior information detected at the predetermined position is normal (step S12). If it is determined that all of the behavior information detected at the predetermined position is normal (step S12; Yes), the arithmetic device 14 causes the calculation unit 38 to acquire the estimation result of the road surface state by using the existing second AI model N without updating it, that is, by inputting the behavior information to the existing second AI model N (step S14).
  • if only part of the behavior information detected at the predetermined position is abnormal (step S16; Yes), in step S18 the calculation unit 38 acquires the estimation result of the road surface state by using the existing second AI model N, that is, by inputting the behavior information to the existing second AI model N, while reducing the degree of influence on the road surface state of the behavior information of the behavior sensor determined to be abnormal (step S18).
  • when not all of the behavior information detected at the predetermined position is normal (step S12; No) and it is not the case that only part of the behavior information at the predetermined position is abnormal (step S16; No), that is, when all of the behavior information detected at the predetermined position is abnormal, the arithmetic device 14 determines whether the behavior information detected in the second area is normal through the second determination unit 36 (step S20). If the behavior information detected in the second area is determined to be normal (step S20; Yes), the process proceeds to step S14, and the calculation unit 38 acquires the estimation result of the road surface condition using the existing second AI model N (step S14). On the other hand, when it is determined that the behavior information detected in the second area is not normal (step S20; No), the arithmetic device 14 updates the second AI model N based on the behavior information (step S22).
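  The branching of FIG. 6 can be mirrored as plain control flow; the following sketch (action strings are placeholders, not wording from the patent) condenses the decisions at steps S12, S16, and S20:

```python
def decide_action(all_normal, only_part_abnormal, second_region_normal):
    """Condensed decision logic of FIG. 6 (steps S12 -> S16 -> S20)."""
    if all_normal:                 # S12; Yes
        return "S14: use existing second AI model N"
    if only_part_abnormal:         # S16; Yes
        return "S18: use existing model, reduce abnormal sensor influence"
    if second_region_normal:       # S20; Yes
        return "S14: use existing second AI model N"
    return "S22: update second AI model N"

# All behavior information abnormal at the position and in the second region
print(decide_action(False, False, False))  # prints "S22: update second AI model N"
```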
  • the computing device 14 includes a behavior information acquisition section 32 , a first determination section 34 , a second determination section 36 and a learning section 30 .
  • the behavior information acquisition unit 32 acquires behavior information indicating the behavior of the vehicle 10 detected by the behavior sensor 10B provided in the vehicle 10 traveling on the road.
  • the first determination unit 34 determines whether the behavior of the vehicle indicated by the behavior information detected at the predetermined position corresponds to the behavior of the vehicle indicated by the reference behavior information detected in the first area overlapping the predetermined position.
  • the reference behavior information refers to the behavior of the vehicle 10 detected prior to the behavior information.
  • the second determination unit 36 determines, based on a plurality of pieces of behavior information detected in a second region wider than the first region and a plurality of pieces of reference behavior information detected in the second region, whether the behavior of the vehicle indicated by the behavior information detected in the second region corresponds to the behavior of the vehicle indicated by the reference behavior information.
  • the learning unit 30 causes the second AI model N (AI model) to machine-learn the correspondence relationship between the behavior of the vehicle 10 and the road surface condition.
  • when it is determined that the behavior information detected in the second area does not correspond to the reference behavior information, the learning unit 30 updates the second AI model N using the behavior information detected in the second area as teacher data.
  • if the characteristics of the vehicle 10 change after the AI model has been trained, the tendency of the vehicle behavior will change, so there is a possibility that the estimation accuracy of the road surface state using the AI model will decrease.
  • in contrast, if the behavior information detected at a predetermined position is not normal and a plurality of pieces of behavior information detected in the wider second area are also not normal, the arithmetic device 14 according to the present embodiment updates the second AI model N using the behavior information detected at that time. Therefore, according to the present embodiment, it is possible to appropriately detect that the characteristics of the vehicle 10 have changed and to update the second AI model N when the characteristics of the vehicle 10 have changed, so that deterioration in the estimation accuracy of the road surface condition can be suppressed.
  • on the other hand, when the behavior information detected at the predetermined position and in the second area is normal, the update of the second AI model N using the behavior information is not executed. That is, in the present embodiment, when the behavior information detected at the predetermined position and in the second area is normal, it is judged that the characteristics of the vehicle 10 have not changed and the existing second AI model N is used, and when the behavior information detected in the second area is not normal, it is judged that the characteristics of the vehicle 10 have changed and the second AI model N is updated. Therefore, according to the present embodiment, it is possible to use a second AI model N adapted to changes in the characteristics of the vehicle 10, thereby suppressing a decrease in the estimation accuracy of the road surface state.
  • the first determination unit 34 and the second determination unit 36 input the behavior information to the first AI model M (learning model) capable of determining whether the behavior information corresponds to the reference behavior information, thereby Determine whether the information corresponds to reference behavior information.
  • by using the first AI model M (learning model), the accuracy of determining whether the behavior information is abnormal can be improved, and characteristic changes of the vehicle 10 can be detected with high accuracy.
  • the arithmetic device 14 further includes an arithmetic unit 38 that inputs the behavior information to the second AI model N and obtains the estimation result of the road surface condition.
  • the behavior information acquisition unit 32 acquires behavior information detected by each of the plurality of behavior sensors 10B provided in the vehicle 10 .
  • the first determination unit 34 determines whether the behavior information detected at the predetermined position and the reference behavior information detected in the first area correspond to each of the plurality of behavior sensors 10B.
  • the calculation unit 38 acquires the estimation result of the road surface condition such that the degree of influence on the road surface condition of the behavior information of the behavior sensor 10B whose behavior information does not correspond to the reference behavior information (abnormal sensor) is lower than the degree of influence on the road surface condition of the behavior information of the behavior sensors 10B other than the abnormal sensor.
  • according to the present embodiment, when the characteristics of some of the behavior sensors 10B have changed, the degree of influence of those behavior sensors 10B is reduced, so it is possible to suppress deterioration in the estimation accuracy of the road surface state.
  • the second AI model N includes a first model to which the behavior information detected by the behavior sensor 10B whose behavior information is not normal (abnormal sensor) is input, and a second model to which the behavior information detected by the behavior sensors 10B other than the abnormal sensor is input. Based on the first estimation result of the road surface state, obtained by inputting the behavior information detected by the abnormal sensor into the first model, and the second estimation result of the road surface state, obtained by inputting the behavior information detected by the behavior sensors other than the abnormal sensor into the second model, the calculation unit 38 calculates the estimation result of the road surface condition such that the degree of influence of the first estimation result on that estimation result is lower than the degree of influence of the second estimation result. According to the present embodiment, even when the characteristics of some of the behavior sensors 10B have changed, it is possible to suppress the decrease in the estimation accuracy of the road surface state.

Abstract

The present invention suppresses a decrease in the accuracy of estimating a road surface condition. This calculation device comprises: a first determination unit (34) that determines, on the basis of behavior information detected at a predetermined position and first reference behavior information detected in a first region overlapping the predetermined position, whether the behavior information detected at the predetermined position corresponds to the first reference behavior information; a second determination unit (36) that, if the behavior information detected at the predetermined position is determined not to correspond to the first reference behavior information, determines, on the basis of behavior information detected in a second region that encompasses the entire first region and is broader than the first region and second reference behavior information detected in the second region, whether the behavior information detected in the second region corresponds to the second reference behavior information; and a learning unit (30) that causes an AI model to perform machine learning. If the behavior information detected in the second region is determined not to correspond to the second reference behavior information, the learning unit (30) updates the AI model using the behavior information detected in the second region as teacher data.

Description

Arithmetic device, arithmetic method, and program
The present invention relates to an arithmetic device, an arithmetic method, and a program.
For example, as shown in Patent Document 1, there is a known technique for estimating road surface conditions by detecting the acceleration of a vehicle traveling on a road and inputting the acceleration data into a learning model.
Japanese Patent Application Laid-Open No. 2020-86960
However, if the characteristics of the components that detect the vehicle behavior change, the accuracy of road surface condition estimation using the learning model may decrease. The present invention has been made in view of the above, and an object thereof is to provide an arithmetic device, an arithmetic method, and a program capable of suppressing a decrease in the accuracy of road surface condition estimation.
In order to solve the above problems and achieve the object, an arithmetic device according to the present disclosure includes: a behavior information acquisition unit that acquires behavior information indicating the behavior of a vehicle, detected by a behavior sensor provided in the vehicle traveling on a road; a first determination unit that determines, based on the behavior information detected at a predetermined position and first reference behavior information, which is vehicle behavior detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information; a second determination unit that, when it is determined that the vehicle behavior indicated by the behavior information detected at the predetermined position does not correspond to the vehicle behavior indicated by the first reference behavior information, determines, based on behavior information detected within a second region that includes the entire first region and is wider than the first region and second reference behavior information detected within the second region before that behavior information, whether the vehicle behavior indicated by the behavior information detected within the second region corresponds to the vehicle behavior indicated by the second reference behavior information; and a learning unit that causes an AI model to machine-learn the correspondence between the vehicle behavior and the road surface condition of the road. When it is determined that the vehicle behavior indicated by the behavior information detected in the second region does not correspond to the vehicle behavior indicated by the second reference behavior information, the learning unit updates the AI model using the behavior information detected in the second region as teacher data.
In order to solve the above problems and achieve the object, a calculation method according to the present disclosure includes: a step of acquiring behavior information indicating the behavior of a vehicle, detected by a behavior sensor provided in the vehicle traveling on a road; a step of determining, based on the behavior information detected at a predetermined position and first reference behavior information, which is vehicle behavior detected in a first region overlapping the predetermined position before that behavior information, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information; a step of determining, when it is determined that the behavior information detected at the predetermined position does not correspond to the first reference behavior information, based on behavior information detected within a second region that includes the entire first region and is wider than the first region and second reference behavior information detected within the second region before that behavior information, whether the vehicle behavior indicated by the behavior information detected within the second region corresponds to the vehicle behavior indicated by the second reference behavior information; and a step of causing an AI model to machine-learn the correspondence between the vehicle behavior and the road surface condition of the road. In the step of causing the AI model to machine-learn, when it is determined that the vehicle behavior indicated by the behavior information detected in the second region does not correspond to the vehicle behavior indicated by the second reference behavior information, the AI model is updated using the behavior information detected in the second region as teacher data.
In order to solve the above problems and achieve the object, a program according to the present disclosure causes a computer to execute: a step of acquiring behavior information indicating the behavior of a vehicle, detected by a behavior sensor provided in the vehicle traveling on a road; a step of determining, based on the behavior information detected at a predetermined position and first reference behavior information, which is vehicle behavior detected in a first region overlapping the predetermined position before that behavior information, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information; a step of determining, when it is determined that the behavior information detected at the predetermined position does not correspond to the first reference behavior information, based on behavior information detected within a second region that includes the entire first region and is wider than the first region and second reference behavior information detected within the second region before that behavior information, whether the vehicle behavior indicated by the behavior information detected within the second region corresponds to the vehicle behavior indicated by the second reference behavior information; and a step of causing an AI model to machine-learn the correspondence between the vehicle behavior and the road surface condition of the road. In the step of causing the AI model to machine-learn, when it is determined that the vehicle behavior indicated by the behavior information in the second region does not correspond to the vehicle behavior indicated by the second reference behavior information, the AI model is updated using the behavior information detected in the second region as teacher data.
According to the present invention, it is possible to suppress a decrease in the estimation accuracy of the road surface condition.
FIG. 1 is a schematic block diagram of a detection system according to the present embodiment.
FIG. 2 is a schematic diagram of a vehicle.
FIG. 3 is a schematic block diagram of an arithmetic device.
FIG. 4 is a schematic diagram showing an example of types of the first AI model.
FIG. 5A is a schematic diagram showing an example of types of the second AI model.
FIG. 5B is a schematic diagram illustrating an example of the first area and the second area.
FIG. 6 is a flowchart explaining the processing flow of the arithmetic device.
Preferred embodiments of the present invention will be described in detail below with reference to the drawings. The present invention is not limited to the embodiments described below.
 (Detection system)
 FIG. 1 is a schematic block diagram of the detection system according to the present embodiment. As shown in FIG. 1, the detection system 1 according to the present embodiment includes a vehicle 10, a measurement data acquisition device 12, and an arithmetic device 14. The detection system 1 is a system that determines the road surface condition of a road based on behavior information of the vehicle 10 traveling on that road. Behavior information is information indicating the behavior of the vehicle 10 traveling on the road, and will be described in detail later. The detection system 1 estimates the road surface condition by having the arithmetic device 14 calculate it based on the behavior information. In the present embodiment, the road surface condition is an index indicating the degree of unevenness of the road surface. More specifically, in the present embodiment, the road surface condition is the IRI (International Roughness Index). However, the road surface condition is not limited to the IRI and may be any index indicating the state of the road surface. For example, the road surface condition may be at least one of the IRI, the flatness of the road surface, cracking, rutting, and the MCI (Maintenance Control Index).
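The flow from vehicle measurement to road-state estimation can be sketched as below. This is a minimal illustration only; the record layout, coordinate format, and the trivial acceleration-to-roughness mapping are assumptions, since the patent leaves the data format and the AI-model internals open.

```python
# Hypothetical sketch of the detection-system data flow:
# vehicle 10 -> measurement data acquisition device 12 -> arithmetic device 14.
from dataclasses import dataclass

@dataclass
class Measurement:
    position: tuple[float, float]  # e.g. (latitude, longitude) from the position sensor
    acceleration: float            # behavior information from a behavior sensor

def estimate_road_state(record: Measurement) -> float:
    # Placeholder for the AI-model inference in the arithmetic device 14;
    # here a made-up mapping from acceleration magnitude to an IRI-like value.
    return abs(record.acceleration) * 0.5

def forward_to_arithmetic_device(records: list[Measurement]) -> list[float]:
    """Stand-in for the measurement data acquisition device 12: it relays
    records; the road-state estimation itself runs in the arithmetic device."""
    return [estimate_road_state(r) for r in records]

records = [Measurement((35.0, 139.0), 1.2), Measurement((35.0, 139.1), -0.8)]
print(forward_to_arithmetic_device(records))
```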
 In the detection system 1, the vehicle 10 detects behavior information and position information while traveling on a road, and transmits the detected behavior information and position information to the measurement data acquisition device 12. The measurement data acquisition device 12 is, for example, a device (computer) managed by an entity that manages the road. The measurement data acquisition device 12 transmits the behavior information and position information received from the vehicle 10 to the arithmetic device 14. The arithmetic device 14 estimates the road surface condition based on the behavior information and position information transmitted from the measurement data acquisition device 12, and transmits the estimation result of the road surface condition to the measurement data acquisition device 12. In this way, the arithmetic device 14 acquires the behavior information and position information via the measurement data acquisition device 12, but the configuration is not limited to this. For example, the detection system 1 may omit the measurement data acquisition device 12, and the arithmetic device 14 may acquire the behavior information and position information directly from the vehicle 10.
 (Vehicle)
 FIG. 2 is a schematic diagram of the vehicle. As shown in FIG. 2, the vehicle 10 includes a position sensor 10A, a behavior sensor 10B, and a measuring device 10C. The position sensor 10A is a sensor that acquires position information of the vehicle 10. The position information of the vehicle 10 is information indicating the earth coordinates of the vehicle 10. In the present embodiment, the position sensor 10A is a GNSS (Global Navigation Satellite System) module. Note that the Z direction in FIG. 2 indicates the vertically upward direction, and FIG. 2 can be regarded as a schematic diagram of the vehicle 10 viewed from vertically above.
 The behavior sensor 10B is a sensor that detects behavior information indicating the behavior of the vehicle 10. The behavior information may be any information as long as it indicates the behavior of the vehicle 10 traveling on the road. In the present embodiment, the behavior sensor 10B preferably detects the acceleration of the vehicle 10 as the behavior information. In this case, the behavior sensor 10B is an acceleration sensor, more preferably an acceleration sensor that detects acceleration along three axes. The behavior information detected by the behavior sensor 10B is not limited to acceleration; for example, it may be at least one of the acceleration, image data capturing the surroundings of the vehicle 10, the speed of the vehicle 10, the angular velocity of the vehicle 10, the steering angle of the vehicle 10, the braking amount of the vehicle 10, the operation of the wipers of the vehicle 10, and the operation amount of the suspension of the vehicle 10. Since the image data of the surroundings of the vehicle 10 changes with the movement of the vehicle 10, it can be said to be information indicating the behavior of the vehicle 10. The behavior sensor 10B that detects captured images of the surroundings of the vehicle 10 is, for example, a camera; the behavior sensor 10B that detects the speed of the vehicle 10 is, for example, a speed sensor; the behavior sensor 10B that detects the angular velocity of the vehicle 10 is, for example, a three-axis gyro sensor; the behavior sensor 10B that detects the steering angle of the vehicle 10 is, for example, a steering sensor; the behavior sensor 10B that detects the braking amount of the vehicle 10 is, for example, a brake sensor; the behavior sensor 10B that detects the operation of the wipers of the vehicle 10 is, for example, a wiper sensor; and the behavior sensor 10B that detects the operation amount of the suspension of the vehicle 10 is, for example, a suspension sensor.
 In the present embodiment, the vehicle 10 is equipped with a plurality of behavior sensors 10B, each mounted at a different position on the vehicle 10. In the example of FIG. 2, the behavior sensors 10B include a behavior sensor 10B1 provided on the Z-direction side (vertically upward side) of the tire TR1, which is the left front wheel; a behavior sensor 10B2 provided on the Z-direction side of the tire TR2, which is the right front wheel; a behavior sensor 10B3 provided on the Z-direction side of the tire TR3, which is the left rear wheel; and a behavior sensor 10B4 provided on the Z-direction side of the tire TR4, which is the right rear wheel. However, the positions at which the behavior sensors 10B are provided are arbitrary. The number of behavior sensors 10B is also not limited to four and may be any number. Likewise, although the number of tires TR is four in the example of FIG. 2, it may be any number, for example any number of two or more. In the example of FIG. 2, the behavior sensors 10B1 to 10B4 detect the same type of behavior information (here, acceleration), but the behavior sensors 10B may detect different types of behavior information. For example, a plurality of behavior sensors 10B that detect the same type of behavior information (e.g., a plurality of acceleration sensors) may be provided together with a behavior sensor 10B that detects a different type of behavior information (e.g., a speed sensor).
 Hereinafter, the behavior information detected by the behavior sensors 10B1, 10B2, 10B3, and 10B4 will be referred to as behavior information D1, D2, D3, and D4, respectively, as appropriate.
 The measuring device 10C is a device that controls the position sensor 10A and the behavior sensor 10B to detect the position information and behavior information of the vehicle 10 and records the detected position information and behavior information. That is, the measuring device 10C functions as a data logger that records the position information and behavior information of the vehicle 10. The measuring device 10C can also be regarded as a computer, and includes a control unit 10C1, a storage unit 10C2, and a communication unit 10C3. The control unit 10C1 is an arithmetic unit, that is, a CPU (Central Processing Unit). The storage unit 10C2 is a memory that stores various information such as the computation contents and programs of the control unit 10C1 and the position information and behavior information of the vehicle 10, and includes, for example, at least one of a main storage device such as a RAM (Random Access Memory) or a ROM (Read Only Memory) and a nonvolatile storage device such as a flash memory or an HDD (Hard Disk Drive). Note that the program for the control unit 10C1 stored in the storage unit 10C2 may be stored in a recording medium readable by the measuring device 10C. The communication unit 10C3 is a communication module, such as an antenna, that communicates with external devices.
 The control unit 10C1 reads the program stored in the storage unit 10C2 and controls the position sensor 10A and the behavior sensor 10B. While the vehicle 10 is traveling on the road, the control unit 10C1 causes the behavior sensor 10B to detect the behavior information of the vehicle 10 at predetermined time intervals. The control unit 10C1 also causes the position sensor 10A to detect the position information of the vehicle 10 at the timing when the behavior sensor 10B detects the behavior information. That is, each time the vehicle 10 travels for the predetermined time, the control unit 10C1 causes the behavior sensor 10B to detect the behavior information of the vehicle 10 and causes the position sensor 10A to detect the position information of the vehicle 10. The predetermined time here is preferably a fixed time, for example one minute, but it is not limited to a fixed time and may be any length; that is, the predetermined time may change each time.
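The periodic, same-timing sampling of the behavior sensor and position sensor described above can be sketched as follows. The sensor-reading callables and record layout are stand-ins, not taken from the patent:

```python
# Illustrative sketch of the data-logger role of the measuring device 10C:
# at each predetermined interval, behavior information and position
# information are detected at the same timing and recorded together.

SAMPLE_INTERVAL_S = 60.0  # the "predetermined time"; e.g. one minute

def log_once(read_behavior, read_position, log: list) -> None:
    """Sample behavior and position at the same timing and record both."""
    log.append({"behavior": read_behavior(), "position": read_position()})

# In the real device this would run on a timer while driving, e.g.:
#   while driving: log_once(...); sleep(SAMPLE_INTERVAL_S)
# Here a single sample with stand-in sensor reads:
log: list = []
log_once(lambda: 0.31, lambda: (35.0, 139.0), log)
print(log)
```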
 The control unit 10C1 transmits the behavior information and position information to the measurement data acquisition device 12 via the communication unit 10C3. The measurement data acquisition device 12 transmits the behavior information and position information received from the vehicle 10 to the arithmetic device 14. If the measurement data acquisition device 12 is not provided, the control unit 10C1 may transmit the behavior information and position information directly to the arithmetic device 14.
 FIG. 3 is a schematic block diagram of the arithmetic device. As shown in FIG. 3, the arithmetic device 14 is, for example, a computer, and includes a communication unit 20, a storage unit 22, and a control unit 24. The communication unit 20 is a communication module, such as an antenna, that communicates with external devices. The storage unit 22 is a memory that stores various information such as the computation contents and programs of the control unit 24 and the first AI model M and second AI model N described later, and includes, for example, at least one of a main storage device such as a RAM or a ROM and a nonvolatile storage device such as a flash memory or an HDD. Note that the program for the control unit 24 and the first AI model M and second AI model N stored in the storage unit 22 may be stored in a recording medium readable by the arithmetic device 14.
 The control unit 24 is an arithmetic unit and includes an arithmetic circuit such as a CPU (Central Processing Unit). The control unit 24 includes a learning unit 30, a behavior information acquisition unit 32, a first determination unit 34, a second determination unit 36, and a calculation unit 38. The control unit 24 implements the learning unit 30, the behavior information acquisition unit 32, the first determination unit 34, the second determination unit 36, and the calculation unit 38 by reading and executing a program (software) from the storage unit 22, and executes their processing. The control unit 24 may execute these processes with a single CPU, or may include a plurality of CPUs and execute the processes with those CPUs. At least some of the learning unit 30, the behavior information acquisition unit 32, the first determination unit 34, the second determination unit 36, and the calculation unit 38 may be implemented in hardware.
 (Learning unit)
 (Learning of the first AI model)
 The learning unit 30 generates a learned first AI model M by subjecting an unlearned first AI model M to machine learning. The learning unit 30 trains the first AI model M so that, when the behavior information of the vehicle 10 is input to the first AI model M, the first AI model M outputs information indicating whether the behavior information is normal. That is, the first AI model M is an AI (Artificial Intelligence) model that performs anomaly detection on the behavior information. Any model may be used as the first AI model M; for example, a model that performs logistic regression analysis or a CNN (Convolutional Neural Network) model may be used.
 That behavior information is normal means that the behavior of the vehicle 10 indicated by that behavior information corresponds to the behavior of the vehicle 10 indicated by the reference behavior information, which is behavior information of the vehicle 10 detected before that behavior information. In other words, behavior information being normal means that the behavior of the vehicle 10 it indicates is of the same quality as the behavior indicated by the reference behavior information, for example that it is similar to the behavior of the vehicle 10 indicated by the reference behavior information. Conversely, behavior information being not normal (abnormal) means that it does not correspond to the reference behavior information; in other words, it is of a different quality from the reference behavior information, for example not similar to it. The reference behavior information is data (acceleration in the present embodiment) detected by having the vehicle 10 travel on the same road before the behavior information is detected and the road surface condition is estimated; it can be regarded as initial behavior information from before the vehicle 10 and the behavior sensors 10B have deteriorated.
 The learning unit 30 sets, as teacher data, a data set of reference behavior information and a label indicating that the reference behavior information is normal. That is, the learning unit 30 sets normal data as the teacher data. The learning unit 30 prepares a plurality of such data sets of reference behavior information and labels. The learning unit 30 inputs the data sets (teacher data) of reference behavior information and labels to the unlearned first AI model M, and trains the first AI model M by machine learning so that, when behavior information is input, the model can output information indicating whether the behavior information is normal (i.e., can label the behavior information as normal or not).
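The scheme above trains on normal (reference) data only. A minimal stand-in for such a one-class anomaly detector, assuming a simple k-sigma rule over the reference distribution (the patent itself leaves the model open, e.g. logistic regression or a CNN; the threshold and function names here are illustrative):

```python
# Hypothetical sketch: learn the distribution of normal reference behavior
# information, then label inputs outside a k-sigma band as not normal.
import statistics

def fit_normal_model(reference_values: list[float], k: float = 3.0):
    mean = statistics.fmean(reference_values)
    std = statistics.pstdev(reference_values)
    def is_normal(value: float) -> bool:
        return abs(value - mean) <= k * std
    return is_normal

# Reference (normal) accelerations from an earlier run on the same road.
model = fit_normal_model([0.10, 0.12, 0.09, 0.11, 0.10])
print(model(0.11))  # True  -> within the learned normal range
print(model(0.90))  # False -> far outside it, labeled not normal
```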
 FIG. 4 is a schematic diagram showing an example of the types of the first AI model. In the present embodiment, as shown in FIG. 4, the learning unit 30 prepares a first AI model M for each combination of a unit area, a motion state range, and the behavior sensor 10B that detected the behavior information. A unit area is one of the areas obtained by dividing the area in which the vehicle 10 travels into sections of a predetermined size. The motion state range is a range of the motion state of the vehicle, in the present embodiment a speed range of the vehicle. For example, the interval between the assumed upper and lower limits of the vehicle speed may be divided into a plurality of speed ranges, and each divided speed range may be treated as a motion state range.
 When the learning unit 30 machine-learns the first AI model M corresponding to a combination of a predetermined behavior sensor 10B, a predetermined unit area, and a predetermined motion state range, it trains that first AI model M using, as teacher data, the reference behavior information detected by that behavior sensor 10B while the position of the vehicle was within that unit area and the vehicle speed was within that motion state range. For example, in the example of FIG. 4, the first AI model M1a corresponds to the combination of the unit area UR1, the motion state range C1, and the behavior sensor 10B1 (whose input data is the behavior information D1). In this case, the learning unit 30 extracts the reference behavior information that was detected by the behavior sensor 10B1 while the position of the vehicle was within the unit area UR1 and the vehicle speed was within the motion state range C1. The learning unit 30 then machine-learns the first AI model M1a using the extracted reference behavior information as teacher data for the first AI model M1a. FIG. 4 also illustrates a first AI model M1b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D1; a first AI model M2a corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior information D2; a first AI model M2b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D2; a first AI model M3a corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior information D3; a first AI model M3b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D3; a first AI model M4a corresponding to the combination of the unit area UR1, the motion state range C1, and the behavior information D4; and a first AI model M4b corresponding to the combination of the unit area UR1, the motion state range C2, and the behavior information D4.
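The extraction of teacher data described above can be sketched as a simple filter over logged reference behavior records. This is a minimal illustration only; the record layout (dict fields), the bounding-box representation of a unit area, and the half-open speed range are all assumptions, not structures defined in the specification.

```python
# Hypothetical sketch: select the teacher data for one first AI model from
# logged reference behavior records, keyed by the combination of behavior
# sensor, unit area, and motion state range described in the text.

def select_teacher_data(records, sensor_id, unit_area, speed_range):
    """Return reference behavior values detected by `sensor_id` while the
    vehicle position was inside `unit_area` and the speed was inside
    `speed_range` (both assumed half-open intervals)."""
    (x0, y0, x1, y1) = unit_area      # unit area modeled as a bounding box
    (v_min, v_max) = speed_range      # motion state range [v_min, v_max)
    selected = []
    for r in records:                 # r: dict with sensor, position, speed, value
        in_area = x0 <= r["x"] < x1 and y0 <= r["y"] < y1
        in_range = v_min <= r["speed"] < v_max
        if r["sensor"] == sensor_id and in_area and in_range:
            selected.append(r["value"])
    return selected

records = [
    {"sensor": "10B1", "x": 5.0,  "y": 5.0, "speed": 30.0, "value": 0.12},
    {"sensor": "10B1", "x": 5.0,  "y": 5.0, "speed": 70.0, "value": 0.40},  # wrong speed range
    {"sensor": "10B2", "x": 5.0,  "y": 5.0, "speed": 30.0, "value": 0.08},  # wrong sensor
    {"sensor": "10B1", "x": 50.0, "y": 5.0, "speed": 30.0, "value": 0.33},  # outside the unit area
]
# Teacher data for a model like M1a: unit area (0,0)-(10,10), speed range [0, 60)
teacher = select_teacher_data(records, "10B1", (0.0, 0.0, 10.0, 10.0), (0.0, 60.0))
```

Only the first record satisfies all three conditions, so `teacher` contains a single value; records differing in sensor, unit area, or speed range are excluded, mirroring how one model per combination sees only its own slice of the data.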
 (Learning of the second AI model)
 The learning unit 30 causes the unlearned second AI model N to machine-learn the correspondence relationship between the behavior of the vehicle and the road surface condition, thereby generating a learned second AI model N that has machine-learned that correspondence relationship. The learning unit 30 may machine-learn the correspondence between the road surface condition and the vehicle behavior detection results obtained by one behavior sensor 10B, or the correspondence between the road surface condition and the vehicle behavior detection results obtained by a plurality of behavior sensors 10B. The second AI model N may be any model capable of outputting a road surface condition when behavior information is input, and may be, for example, a CNN model.
 The learning unit 30 sets, as teacher data, data sets each consisting of reference behavior information and the road surface condition at the position where that reference behavior information was detected. A value measured in advance may be used as the road surface condition at the position where the reference behavior information was detected. The learning unit 30 sets a plurality of such data sets of reference behavior information and road surface condition as teacher data. The learning unit 30 inputs the data sets set as teacher data to the unlearned second AI model N, and causes the second AI model N to machine-learn the correspondence relationship between the behavior information and the road surface condition.
 FIG. 5A is a schematic diagram showing an example of the types of the second AI model. In the present embodiment, as shown in FIG. 5A, the learning unit 30 prepares a plurality of second AI models N that differ in at least some of the behavior sensors 10B whose detected behavior information serves as input data. That is, the second AI models N differ from one another in at least part of the behavior information input as input data. FIG. 5A illustrates a second AI model N1 to which the behavior information D1, D2, D3, and D4 detected by the behavior sensors 10B1, 10B2, 10B3, and 10B4 is input; a second AI model N2 to which the behavior information D1, D2, and D3 detected by the behavior sensors 10B1, 10B2, and 10B3 is input; a second AI model N3 to which the behavior information D1, D2, and D4 detected by the behavior sensors 10B1, 10B2, and 10B4 is input; a second AI model N4 to which the behavior information D1, D3, and D4 detected by the behavior sensors 10B1, 10B3, and 10B4 is input; a second AI model N5 to which the behavior information D2, D3, and D4 detected by the behavior sensors 10B2, 10B3, and 10B4 is input; and a second AI model Nn to which the behavior information D4 detected by the behavior sensor 10B4 is input.
 When the learning unit 30 machine-learns a second AI model N to which the behavior information of predetermined behavior sensors 10B is input, it machine-learns that second AI model N using the reference behavior information detected by those behavior sensors 10B as teacher data. For example, the learning unit 30 machine-learns the second AI model N1 using, as teacher data for the second AI model N1, the respective pieces of reference behavior information detected by the behavior sensors 10B1, 10B2, 10B3, and 10B4 and the road surface conditions at the positions where those pieces of reference behavior information were detected.
 (Behavior information acquisition unit)
 The behavior information acquisition unit 32 acquires the behavior information detected by the behavior sensors 10B. More specifically, the behavior information acquisition unit 32 acquires the behavior information and position information indicating the position of the vehicle 10 at the time the behavior information was detected. The behavior information acquisition unit 32 acquires the behavior information and the position information from the measurement data acquisition device 12 via the communication unit 20, but may also acquire them directly from the vehicle 10.
 (First determination unit)
 The first determination unit 34 determines, based on the behavior information detected at a predetermined position and the reference behavior information detected in a first area overlapping that predetermined position (first reference behavior information detected in the first area before the behavior information detected at the predetermined position), whether the behavior information detected at the predetermined position corresponds to the reference behavior information detected in the first area, that is, whether the behavior information is normal. The predetermined position here may be any point (position) on the route along which the vehicle 10 traveled on the road. That is, the first determination unit 34 may extract, from a plurality of pieces of behavior information detected at different positions, the behavior information detected at an arbitrary position as the behavior information detected at the predetermined position. The first area is an area of a predetermined size that overlaps the position (predetermined position) of the vehicle 10 at the time the behavior information was detected, and may be, for example, an area occupying a range of a predetermined radius centered on the position of the vehicle 10. In the present embodiment, among the unit areas into which the area traveled by the vehicle 10 is divided, the unit area overlapping the position of the vehicle 10 at the time the behavior information was detected may be used as the first area. That is, the first determination unit 34 extracts, as the first area, the unit area overlapping the position of the vehicle 10 indicated by the position information acquired by the behavior information acquisition unit 32 (the position information of the vehicle 10 at the time the behavior information was detected), and determines whether the behavior information is normal based on the behavior information and the reference behavior information detected in the first area.
 In the present embodiment, the first determination unit 34 reads out, from among the plurality of first AI models M, the learned first AI model M corresponding to the first area (the unit area overlapping the position of the vehicle 10 at the time the behavior information was detected), and inputs the behavior information detected at the predetermined position to the read first AI model M. The first determination unit 34 acquires the information output from that first AI model M indicating whether the behavior information is normal, and determines, based on the acquired information, whether the behavior information detected at the predetermined position is normal. For example, the first determination unit 34 may determine that the behavior information is normal when a label indicating that the behavior information is normal is output from the first AI model M1a, and may determine that the behavior information is not normal (abnormal) when a label indicating that the behavior information is not normal is output from the first AI model M1a. In a case where the first AI model M1a outputs the probability that the behavior information is normal, the first determination unit 34 may determine that the behavior information is normal when that probability is equal to or greater than a predetermined value, and that the behavior information is not normal when that probability is less than the predetermined value.
 More specifically, the behavior information acquisition unit 32 also acquires, together with the behavior information and the position information, the motion state of the vehicle 10 at the time the behavior information was detected. In the present embodiment, the motion state is the speed of the vehicle 10, which is detected by a speed sensor mounted on the vehicle 10. The first determination unit 34 extracts the motion state range (vehicle speed range) that overlaps the motion state (vehicle speed) of the vehicle 10 at the time the behavior information was detected. The first determination unit 34 then reads out the first AI model M corresponding to the combination of the behavior sensor 10B that detected the behavior information, the unit area extracted as the first area, and the extracted motion state range. The first determination unit 34 inputs the behavior information to the read first AI model M and determines whether the behavior information is normal. For example, when the behavior information is detected by the behavior sensor 10B1, the position of the vehicle 10 at the time of detection overlaps the unit area UR1, and the motion state at the time of detection overlaps the motion state range C1, the first determination unit 34 reads out the first AI model M1a (see FIG. 4), inputs the behavior information to the first AI model M1a, and determines whether the behavior information is normal. Even when the first AI model M is used in this way, since the first AI model M has been trained based on the reference behavior information, it can be said that whether the behavior information is normal is determined based on the reference behavior information.
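The model selection described above amounts to a lookup keyed by (sensor, unit area, motion state range). The sketch below illustrates that lookup; the grid-cell mapping, the range table, and the model names are illustrative assumptions (the names follow the FIG. 4 example, but the grid geometry is invented here).

```python
# Hypothetical sketch: resolving the learned first AI model for the combination
# of behavior sensor, unit area, and motion state range, as the first
# determination unit does before inputting the behavior information.

def unit_area_of(x, y, cell=10.0):
    """Map a vehicle position to a unit-area id (assumed square grid cells)."""
    return (int(x // cell), int(y // cell))

def speed_range_of(speed, ranges):
    """Map a vehicle speed to the id of the overlapping motion state range."""
    for name, (lo, hi) in ranges.items():
        if lo <= speed < hi:
            return name
    raise ValueError("speed outside all motion state ranges")

# Assumed motion state ranges C1/C2 and a registry of learned models.
ranges = {"C1": (0.0, 60.0), "C2": (60.0, 120.0)}
models = {
    ("10B1", (0, 0), "C1"): "M1a",
    ("10B1", (0, 0), "C2"): "M1b",
}

# Behavior info from sensor 10B1 at position (5, 5) while driving at 30:
key = ("10B1", unit_area_of(5.0, 5.0), speed_range_of(30.0, ranges))
model = models[key]  # resolves to "M1a", matching the example in the text
```

In a real system the registry values would be trained model objects rather than name strings; the point is only that one model exists per combination and is selected by key.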
 Here, since a plurality of behavior sensors 10B are provided, the behavior information acquisition unit 32 acquires the behavior information detected by each of the behavior sensors 10B at the same position. The first determination unit 34 determines, for each piece of behavior information detected by each behavior sensor 10B at the same predetermined position, whether that piece of behavior information is normal. For example, the first determination unit 34 determines, for each of the behavior information D1 detected by the behavior sensor 10B1 at the predetermined position, the behavior information D2 detected by the behavior sensor 10B2 at the predetermined position, the behavior information D3 detected by the behavior sensor 10B3 at the predetermined position, and the behavior information D4 detected by the behavior sensor 10B4 at the predetermined position, whether that piece of behavior information is normal.
 Note that the determination by the first determination unit 34 of whether the behavior information is normal is not limited to using the first AI model M. For example, the first determination unit 34 may set a range of behavior information regarded as normal (a normal range) based on the reference behavior information, and determine whether the behavior information is normal based on the normal range. The first determination unit 34 determines that the behavior information acquired by the behavior information acquisition unit 32 is normal when it falls within the normal range, and that it is not normal when it falls outside the normal range. The normal range may be set by any method; for example, a 3σ range calculated from a plurality of pieces of reference behavior information based on a normal distribution approach may be set as the normal range. Like the first AI model M, the normal range may be set for each combination of the unit area, the motion state range, and the behavior sensor 10B that detected the behavior information. In this case, the first determination unit 34 reads out the normal range set for the combination of the unit area, the motion state range, and the behavior sensor 10B corresponding to the behavior information, and determines that the behavior information is normal when it falls within that normal range.
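The 3σ alternative above can be sketched directly: compute mean ± 3σ from past reference behavior information and judge a new value normal if it falls inside that interval. The sample values below are illustrative only.

```python
# Minimal sketch of the normal-range check: a normal range of mean ± 3σ is
# derived from past reference behavior information, and a newly detected value
# is judged normal if and only if it falls within the range.
import statistics

def normal_range(reference_values):
    mu = statistics.mean(reference_values)
    sigma = statistics.pstdev(reference_values)  # population standard deviation
    return (mu - 3 * sigma, mu + 3 * sigma)

def is_normal(value, rng):
    lo, hi = rng
    return lo <= value <= hi

# Reference behavior information for one (unit area, range, sensor) combination:
refs = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11]
rng = normal_range(refs)

is_normal(0.115, rng)  # within mean ± 3σ of the references -> True
is_normal(0.50, rng)   # far outside the normal range -> False
```

As in the text, one such range would be kept per combination of unit area, motion state range, and behavior sensor, and looked up before the check.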
 (Second determination unit)
 The second determination unit 36 determines, based on the behavior information detected within a second area wider than the first area and the reference behavior information detected within the second area (second reference behavior information detected within the second area before the behavior information detected within the second area), whether the behavior information detected within the second area corresponds to the reference behavior information, that is, whether the behavior information detected within the second area is normal. The second determination unit 36 determines whether the behavior information detected within the second area is normal when the first determination unit 34 has determined that the behavior information detected at the predetermined position is not normal. In the present embodiment, the second determination unit 36 determines whether the behavior information detected within the second area is normal when all of the behavior information detected by the respective behavior sensors 10B at the predetermined position has been determined to be not normal. That is, in the present embodiment, when only the behavior information from some of the behavior sensors 10B at the predetermined position has been determined to be not normal, it is not necessary to determine whether the behavior information detected within the second area is normal.
 FIG. 5B is a schematic diagram explaining an example of the first area and the second area. The second area may be an area at any position within the area traveled by the vehicle 10, as long as it is wider than the first area. In the present embodiment, the second area preferably includes the entire first area and is wider than the first area; for example, it may be the entire area (entire section) traveled by the vehicle 10. For example, as shown in FIG. 5B, assume that behavior information is acquired at predetermined positions Pa, Pb, Pc, and Pd in the area AR traveled by the vehicle 10. In this case, the first areas (unit areas) overlapping the predetermined positions Pa, Pb, Pc, and Pd are the first areas UR1a, UR1b, UR1c, and UR1d, respectively. The second area UR2 may then be an area that is wider than each of the first areas UR1a, UR1b, UR1c, and UR1d and that includes the entirety of each of them, and it may be an area that includes the entire area AR traveled by the vehicle 10. However, the second area is not limited to this; for example, the second area UR2 may include only the entirety of the first areas UR1a and UR1b.
 Within the second area, a plurality of pieces of behavior information are detected, one for each position. The second determination unit 36 determines, for each piece of behavior information detected at a different position within the second area, whether that piece of behavior information is normal, and determines whether the behavior information detected within the second area is normal based on the results of those determinations. The second determination unit 36 may determine whether the behavior information for each position is normal by the same method as the first determination unit 34. For example, the second determination unit 36 reads out the first AI model M corresponding to the combination of the behavior sensor 10B that detected the behavior information, the unit area overlapping the position where the behavior information was detected, and the motion state range overlapping the motion state at the time of detection, inputs the behavior information to the read first AI model M, and acquires information indicating whether the behavior information is normal. The second determination unit 36 acquires, by the same method, information indicating whether the behavior information is normal for each piece of behavior information detected at a different position within the second area. The second determination unit 36 then determines whether the behavior information detected within the second area is normal based on the information indicating whether each piece of behavior information is normal. For example, the second determination unit 36 may determine that the behavior information detected within the second area is normal when the proportion of the behavior information determined to be normal by the first AI models M, among the behavior information detected within the second area, is equal to or greater than a predetermined value, and that it is not normal when that proportion is less than the predetermined value.
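The proportion-based decision above reduces to aggregating per-position normality results. The sketch below shows that aggregation; the 0.8 threshold is an arbitrary stand-in for the "predetermined value" in the text.

```python
# Minimal sketch of the second determination unit's ratio-based decision:
# per-position normality results within the second area are aggregated, and
# the second area's behavior information is judged normal only if the
# fraction of normal results meets a threshold (0.8 here is an assumption).

def second_area_is_normal(per_position_results, threshold=0.8):
    """per_position_results: list of booleans, one per detection position
    inside the second area (True = that position's behavior info was judged
    normal, e.g. by the corresponding first AI model)."""
    if not per_position_results:
        raise ValueError("no behavior information detected in the second area")
    ratio = sum(per_position_results) / len(per_position_results)
    return ratio >= threshold

# 9 of 10 positions normal -> ratio 0.9 >= 0.8, second area judged normal
second_area_is_normal([True] * 9 + [False])

# 5 of 10 positions normal -> ratio 0.5 < 0.8, second area judged not normal
second_area_is_normal([True] * 5 + [False] * 5)
```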
 Note that the second determination unit 36 may determine whether the behavior information detected by each of the behavior sensors 10B within the second area is normal, or may determine whether only the behavior information detected by some of the behavior sensors 10B within the second area (for example, only the behavior sensor 10B1) is normal. Further, in the present embodiment, the second determination unit 36 determines whether the behavior information detected within the second area is normal based on a plurality of pieces of behavior information detected within the second area and a plurality of pieces of reference behavior information detected within the second area. However, the second determination unit 36 is not limited to using a plurality of pieces of behavior information and a plurality of pieces of reference behavior information; for example, it may determine whether the behavior information detected within the second area is normal based on one piece of behavior information detected within the second area and one piece of reference behavior information detected within the second area.
 Here, while the behavior information whose normality is determined by the first determination unit 34 is data for one point at the predetermined position, the behavior information whose normality is determined by the second determination unit 36 is data for a plurality of points within the second area. That is, it can be said that the second determination unit 36 determines the normality of a larger number of pieces of behavior information than the first determination unit 34 does. The behavior information whose normality is determined by the first determination unit 34 is not limited to data for one point and may be data for a plurality of points, but even in this case, the number of pieces of behavior information whose normality is determined by the second determination unit 36 is larger.
 The determination by the second determination unit 36 of whether the behavior information is normal is not limited to using the first AI model M. The second determination unit 36 may, as described for the first determination unit 34, set a normal range and determine whether the behavior information is normal based on the normal range. The second determination unit 36 may also determine whether the behavior information is normal using an AI model other than the first AI model M that is capable of determining whether the behavior information in the second area is normal.
 An example of the behavior information detected at the predetermined positions and the reference behavior information detected in the first areas and the second area will be described with reference to FIG. 5B. In FIG. 5B, behavior information is acquired at the predetermined positions Pa, Pb, Pc, and Pd, and the first areas (unit areas) UR1a, UR1b, UR1c, and UR1d are the first areas overlapping the predetermined positions Pa, Pb, Pc, and Pd. In this case, the first determination unit 34 treats the behavior information detected in the past at the position Pa' within the first area UR1a as the reference behavior information detected in the first area UR1a, and determines whether the behavior information detected at the predetermined position Pa corresponds to the reference behavior information detected at the position Pa'. Similarly, the first determination unit 34 determines whether the behavior information detected at each of the predetermined positions Pb, Pc, and Pd corresponds to the reference behavior information detected at each of the positions Pb', Pc', and Pd'. When a plurality of pieces of reference behavior information are detected in one first area, the average value of those pieces of reference behavior information may be treated as the reference behavior information of that first area. That is, for example, the first determination unit 34 may determine whether the behavior information detected at the predetermined position Pa corresponds to the average value of the plurality of pieces of reference behavior information detected within the first area UR1a.
 In the example of FIG. 5B, the second determination unit 36 treats the behavior information detected in the past at each of the positions Pa', Pb', Pc', and Pd' as the reference behavior information detected in the second area UR2, and determines whether the behavior information detected at the predetermined positions Pa, Pb, Pc, and Pd corresponds to the reference behavior information detected at the positions Pa', Pb', Pc', and Pd'. By comparing a plurality of pieces of behavior information with a plurality of pieces of reference behavior information detected within the second area UR2 in this way, whether the behavior information is normal can be appropriately determined over a wide area. Alternatively, as described above, one piece of behavior information may be compared with one piece of reference behavior information detected within the second area UR2. In this case, for example, the second determination unit 36 may treat the average value of the reference behavior information detected at a plurality of positions within the second area UR2, such as the positions Pa' and Pb', as one piece of reference behavior information detected within the second area UR2, and determine whether the behavior information detected at the predetermined position Pa corresponds to that average value. By comparing one piece of behavior information with one piece of reference behavior information in this way, whether the behavior information is normal can be appropriately determined in an area close to the position Pa while taking a wide area into consideration.
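The averaged-reference comparison described for both the first and second areas can be sketched as follows. The tolerance deciding "corresponds / does not correspond" is a hypothetical parameter; the specification leaves the correspondence criterion open (AI model, normal range, etc.).

```python
# Hypothetical sketch: compare one piece of behavior information against the
# mean of several pieces of reference behavior information, with an assumed
# absolute tolerance standing in for the correspondence criterion.

def corresponds(behavior_value, reference_values, tolerance=0.05):
    """True if behavior_value lies within `tolerance` of the reference mean."""
    mean_ref = sum(reference_values) / len(reference_values)
    return abs(behavior_value - mean_ref) <= tolerance

# References detected in the past at Pa' and Pb' inside the second area UR2:
refs_ur2 = [0.10, 0.14]           # mean 0.12

corresponds(0.13, refs_ur2)       # |0.13 - 0.12| <= 0.05 -> corresponds
corresponds(0.30, refs_ur2)       # |0.30 - 0.12| >  0.05 -> does not correspond
```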
 (Calculation unit)
 The calculation unit 38 calculates the road surface condition of the road by inputting the behavior information to the machine-learned second AI model N. By inputting the behavior information to the second AI model N as input data, the calculation unit 38 acquires, as output data from the second AI model N, an estimation result of the road surface condition at the position where the behavior information was detected. The calculation unit 38 calculates the road surface condition based on the estimation result of the road surface condition acquired from the second AI model N. The calculation unit 38 calculates the road surface condition by different methods according to the determination results of the first determination unit 34 and the second determination unit 36. Each case will be described below.
 (When all of the behavior information detected at the predetermined position is determined to be normal)
 When the first determination unit 34 determines that all of the behavior information detected by the respective behavior sensors 10B at the predetermined position is normal, the calculation unit 38 calculates the road surface condition of the road by inputting the behavior information to the existing second AI model N. That is, in this case, the road surface condition is calculated using the existing second AI model N generated based on the reference behavior information detected earlier, without updating the second AI model N using the behavior information detected this time. In this way, when all of the behavior information detected at the predetermined position is determined to be normal, the characteristics of the vehicle 10 and the behavior sensors 10B are regarded as unchanged, and the road surface condition is estimated using the existing learned second AI model N without updating it.
 In the present embodiment, a plurality of second AI models N that differ in the behavior information they receive as input are prepared. The calculation unit 38 inputs the behavior information into each second AI model N and obtains the estimation result of the road surface state output by each second AI model N. The calculation unit 38 calculates the road surface state of the road based on the estimation results of the road surface state output by the respective second AI models N; in other words, the calculation unit 38 can be said to perform ensemble learning. In the example of FIG. 5A, the calculation unit 38 inputs the behavior information D1, D2, D3, and D4 into the second AI model N1 and obtains the estimated road surface state value O1 as output data. Similarly, the calculation unit 38 inputs the behavior information D1, D2, D3 into the second AI model N2, the behavior information D1, D2, D4 into the second AI model N3, the behavior information D1, D3, D4 into the second AI model N4, the behavior information D2, D3, D4 into the second AI model N5, and the behavior information D4 into the second AI model Nn, and obtains the estimated road surface state values O2, O3, O4, O5, and On as output data. The calculation unit 38 then calculates the road surface state based on the respective estimated values O1 to On. For example, the calculation unit 38 may calculate the estimated value R, which is the final estimation result of the road surface state, based on the following equation (1).
 R = (r1·O1 + r2·O2 + r3·O3 + r4·O4 + r5·O5 + … + rn·On)/n   … (1)
 Here, r1, r2, r3, r4, r5, ... rn are weighting coefficients applied to the estimated values O1, O2, O3, O4, O5, ... On, respectively, and n is the number of second AI models N. The weighting coefficients r1 ... rn may be set arbitrarily. As described above, in the present embodiment the average of the values obtained by multiplying the estimated value of each second AI model N by its weighting coefficient is calculated as the estimated value of the road surface state; however, this is not limiting, and the estimated value of the road surface state may be calculated by any method that uses the estimated values of the respective second AI models N. Hereinafter, when the estimated values O1 to On of the individual second AI models N are not distinguished, they are referred to as the estimated value O, and when the weighting coefficients r1 to rn of the individual second AI models N are not distinguished, they are referred to as the weighting coefficient r.
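The weighted combination of equation (1) can be sketched as follows. This is an illustrative sketch only: the model callables, input tuples, and weight values are hypothetical stand-ins, not part of the disclosed implementation.

```python
def estimate_road_surface(models, inputs, weights):
    """Combine per-model estimates O1..On into the final estimate R.

    models  - n trained second AI models, each a callable
    inputs  - n input tuples (the behavior-information subset each
              model expects, e.g. (D1, D2, D3, D4) for model N1)
    weights - n weighting coefficients r1..rn
    """
    n = len(models)
    # Obtain each model's road-surface estimate O1..On.
    estimates = [model(*data) for model, data in zip(models, inputs)]
    # R = (r1*O1 + r2*O2 + ... + rn*On) / n  ... equation (1)
    return sum(r * o for r, o in zip(weights, estimates)) / n
```

With two dummy models returning constant estimates 2.0 and 4.0 and weights (1.0, 0.5), R = (1.0·2.0 + 0.5·4.0)/2 = 2.0.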
 (When only some of the behavior information detected at the predetermined position is determined not to be normal)
 When the first determination unit 34 determines that only some of the behavior information detected at the predetermined position is not normal, the calculation unit 38 calculates the road surface state while still using the existing second AI model N, such that the degree of influence on the road surface state of the behavior information detected by the behavior sensor 10B determined not to be normal (the abnormal sensor) is lower than the degree of influence of the behavior information detected by the behavior sensors 10B determined to be normal. For example, when the behavior information detected by the behavior sensor 10B1 is determined not to be normal and the behavior information detected by the behavior sensors 10B2, 10B3, and 10B4 is determined to be normal, the calculation unit 38 calculates the estimated road surface state value R such that the degree of influence on R of the behavior information D1 of the behavior sensor 10B1 is lower than the degree of influence on R of the behavior information D2, D3, and D4 of the behavior sensors 10B2, 10B3, and 10B4. In this way, when only some of the behavior information is determined to be abnormal, it is concluded that the characteristics of that behavior sensor 10B have changed, and the road surface state is estimated with the degree of influence of that sensor's behavior information lowered. Since the road surface state is thus estimated with a reduced degree of influence from the behavior sensor 10B whose characteristics have changed, deterioration in the estimation accuracy of the road surface state can be suppressed. Note that even when only some of the behavior information is determined to be abnormal, the existing second AI model N generated based on the previously detected reference behavior information is used without being updated with the behavior information detected this time, just as when all of the behavior information is determined to be normal.
 Here, the output data (estimated value O) from a second AI model N whose input data includes the behavior information of the behavior sensor 10B determined not to be normal (a first model) is referred to as a first estimation result, and the output data (estimated value O) from a second AI model N whose input data does not include the behavior information of the behavior sensor 10B determined not to be normal (a second model) is referred to as a second estimation result. In this case, based on the first estimation result, obtained by inputting the behavior information of the behavior sensor 10B determined not to be normal into the first model, and the second estimation result, obtained by inputting the behavior information into the second model, the calculation unit 38 calculates the estimated road surface state value R such that the degree of influence of the first estimation result on R is lower than the degree of influence of the second estimation result on R.
 Specifically, the calculation unit 38 sets the weighting coefficient r for the estimated value O (first estimation result) of a second AI model N whose input data includes the behavior information of the abnormal behavior sensor 10B to a smaller value than the weighting coefficient r for the estimated value O (second estimation result) of a second AI model N whose input data does not include that behavior information. That is, for example, when only the behavior sensor 10B1 is determined not to be normal, the weighting coefficients r1, r2, r3, and r4 in equation (1) are made smaller than the weighting coefficients r5 and rn. As a result, the degree of influence on the estimated value R of the estimated values O1 to O4 of the second AI models N1 to N4, into which the behavior information of the behavior sensor 10B1 is input, becomes lower than the degree of influence on R of the estimated values O5 and On of the second AI models N5 and Nn, into which the behavior information of the behavior sensor 10B1 is not input.
 Note that the method of lowering the degree of influence of the behavior information of the behavior sensor 10B determined not to be normal is not limited to reducing the weighting coefficient r, and any method may be used. For example, when inputting the behavior information into the second AI models N, the calculation unit 38 may lower the degree of influence of the behavior information of the behavior sensor 10B determined not to be normal by correcting that behavior information. In this case, for example, the calculation unit 38 may input the behavior information of the behavior sensor 10B determined not to be normal into the second AI models N as an infinite value. The computation of any second AI model N that receives that behavior information then results in an error, and the estimated value O of that second AI model N becomes invalid, so the degree of influence can be lowered. Alternatively, for example, the estimated value R may be calculated using only the second AI models N whose input data does not include the behavior information of the abnormal behavior sensor 10B, without using the second AI models N whose input data includes that behavior information.
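One way to realize the weight reduction described above can be sketched as follows: every weighting coefficient r whose model consumes the abnormal sensor's behavior information is scaled down before evaluating equation (1). The sensor labels and the scaling factor are illustrative assumptions, not values from the disclosure.

```python
def adjust_weights(weights, model_inputs, abnormal_sensor, scale=0.1):
    """Return weights with models fed by the abnormal sensor scaled down.

    weights         - weighting coefficients r1..rn
    model_inputs    - one set of sensor labels per model (e.g. {"D1", "D2"})
    abnormal_sensor - label of the sensor determined not to be normal
    scale           - reduction factor for affected models (assumed value)
    """
    return [r * scale if abnormal_sensor in sensors else r
            for r, sensors in zip(weights, model_inputs)]
```

For instance, with sensor "D1" judged abnormal, the weights of every model whose input set contains "D1" are multiplied by 0.1, so those models' estimates contribute correspondingly less to R.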
 (When the behavior information detected within the second region is determined to be normal)
 As described above, when the first determination unit 34 determines that the behavior information detected at the predetermined position is not normal, the second determination unit 36 determines whether the behavior information detected within the second region is normal. When the second determination unit 36 determines that the behavior information detected within the second region is normal, the calculation unit 38 calculates the road surface state of the road by inputting the behavior information into the existing second AI model N. That is, in this case, as when all of the behavior information detected at the predetermined position is determined to be normal, the road surface state is calculated using the existing second AI model N generated based on the previously detected reference behavior information, without updating it with the behavior information detected this time. In this way, when the behavior information at the predetermined position is abnormal but the behavior information detected in the wider second region is determined to be normal, it is concluded that the characteristics of the vehicle 10 and the behavior sensors 10B have not changed and that the road surface state has instead changed at the predetermined position, and the road surface state is estimated using the existing learned second AI model N without updating it.
 (When the behavior information detected within the second region is determined not to be normal)
 When the second determination unit 36 determines that the behavior information detected within the second region is not normal, the learning unit 30 updates the second AI model N. Specifically, the learning unit 30 causes the second AI model N to perform machine learning using the behavior information as teacher data, and updates the model used for estimating the road surface state to a second AI model N trained with the behavior information detected within the second region as teacher data. The behavior information used to update the second AI model N may be the group of behavior information acquired when the behavior information determined not to be normal by the second determination unit 36 was detected (that is, the group of behavior information detected during the current run of the vehicle 10); it can also be described as behavior information detected within the second region that was acquired later than the reference behavior information used to train the second AI model N before the update.
 More specifically, the learning unit 30 generates a new learned second AI model N by inputting a data set of behavior information and road surface states as teacher data into an unlearned second AI model N and causing it to perform machine learning. That is, rather than re-training the learned second AI model N that was machine-learned using the reference behavior information, the learning unit 30 can be said to generate a new second AI model N whose teacher data excludes the reference behavior information and consists only of the behavior information. Note that the road surface states used as teacher data for the new second AI model N need not be newly measured values; the already-measured road surface states used as teacher data for the second AI model N before the update may be used. However, measured road surface states judged unusable, for example because the data is old, need not be used as teacher data.
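The train-from-scratch update described above can be sketched as follows. `NearestTeacherModel` is a deliberately trivial toy stand-in for the second AI model (the disclosure does not specify the model architecture); the point illustrated is that the old learned model is discarded and a fresh model is fitted on the new behavior information paired with the already-measured road surface states.

```python
class NearestTeacherModel:
    """Toy model: predicts the road state of the nearest training behavior."""

    def fit(self, behaviors, road_states):
        # Teacher data: (behavior information, measured road surface state).
        self.data = list(zip(behaviors, road_states))
        return self

    def predict(self, behavior):
        return min(self.data, key=lambda pair: abs(pair[0] - behavior))[1]


def update_second_ai_model(behaviors, road_states):
    # Start from an *unlearned* model rather than fine-tuning the old one;
    # the reference behavior information used previously is excluded.
    return NearestTeacherModel().fit(behaviors, road_states)
```

After the update, only behavior information detected subsequently is fed to the new model for estimation, since the current run's data served as its teacher data.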
 The calculation unit 38 estimates the road surface state using the new second AI model N updated with the behavior information. In this case, the behavior information detected this time has already been used to generate the new second AI model N and is therefore not used for estimating the road surface state. That is, the calculation unit 38 estimates the road surface state by inputting new behavior information, detected after the second AI model N was updated, into the updated second AI model N.
 In this way, when the behavior information detected within the second region is not normal, it is concluded that the characteristics of the vehicle 10 or of the behavior sensors 10B as a whole, rather than of an individual behavior sensor 10B, have changed and that the existing second AI model N can no longer be used, and the second AI model N is updated using the latest behavior information. It therefore becomes possible to use a second AI model N matched to the changed characteristics of the vehicle 10, and deterioration in the estimation accuracy of the road surface state can be suppressed.
 As described above, in the present embodiment, when some of the behavior information detected at the predetermined position is abnormal, processing is performed to lower the degree of influence of the abnormal behavior sensor 10B, and when all of the behavior information detected at the predetermined position is abnormal and the behavior information detected within the second region is also not normal, processing is performed to update the second AI model N. However, it is not essential to perform all of these processes; at least some of them may be performed. That is, for example, only the process of updating the second AI model N when the behavior information detected at the predetermined position is not normal and the behavior information detected within the second region is not normal may be performed. Conversely, only the process of lowering the degree of influence of the abnormal behavior sensor 10B when some of the behavior information detected at the predetermined position is abnormal may be performed.
 (Processing flow)
 Next, the processing flow of the calculation device 14 will be described. FIG. 6 is a flowchart explaining the processing flow of the calculation device. As shown in FIG. 6, the calculation device 14 acquires behavior information with the behavior information acquisition unit 32 (step S10), and the first determination unit 34 determines whether all of the behavior information detected at the predetermined position is normal (step S12). When all of the behavior information detected at the predetermined position is determined to be normal (step S12; Yes), the calculation device 14 obtains the estimation result of the road surface state with the calculation unit 38, using the existing second AI model N without updating it, that is, by inputting the behavior information into the existing second AI model N (step S14). On the other hand, when not all of the behavior information detected at the predetermined position is normal (step S12; No) and only some of the behavior information at the predetermined position is abnormal (step S16; Yes), the calculation unit 38 obtains the estimation result of the road surface state using the existing second AI model N, that is, by inputting the behavior information into it, while lowering the degree of influence on the road surface state of the behavior information of the behavior sensor determined to be abnormal (step S18). When not all of the behavior information detected at the predetermined position is normal (step S12; No) and it is not the case that only some of the behavior information at the predetermined position is abnormal (step S16; No), that is, when all of the behavior information detected at the predetermined position is abnormal, the calculation device 14 determines with the second determination unit 36 whether the behavior information detected in the second region is normal (step S20). When the behavior information detected in the second region is determined to be normal (step S20; Yes), the processing proceeds to step S14, and the calculation unit 38 obtains the estimation result of the road surface state using the existing second AI model N (step S14). On the other hand, when the behavior information detected in the second region is determined not to be normal (step S20; No), the calculation device 14 updates the second AI model N based on the behavior information (step S22).
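The branching of FIG. 6 (steps S10 to S22) can be sketched as follows. The function and parameter names are hypothetical stand-ins for the units described above, and the three boolean inputs represent the outcomes of the determinations at steps S12, S16, and S20.

```python
def estimate_flow(behavior_info, all_normal, only_some_abnormal,
                  second_region_normal, estimate, estimate_downweighted,
                  update_model):
    """Route one batch of behavior information through the FIG. 6 flow."""
    if all_normal:                                   # step S12: Yes
        return estimate(behavior_info)               # step S14: existing model N
    if only_some_abnormal:                           # step S16: Yes
        return estimate_downweighted(behavior_info)  # step S18: lower influence
    if second_region_normal:                         # step S20: Yes
        return estimate(behavior_info)               # step S14: existing model N
    update_model(behavior_info)                      # step S22: retrain model N
    # This run's data becomes teacher data for the new model,
    # so no estimate is produced from it.
    return None
```

Note that the "second region normal" branch is only reached when the determinations at both S12 and S16 were negative, mirroring the flowchart.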
 (Effects)
 The calculation device 14 according to the present embodiment includes the behavior information acquisition unit 32, the first determination unit 34, the second determination unit 36, and the learning unit 30. The behavior information acquisition unit 32 acquires behavior information indicating the behavior of the vehicle 10, detected by the behavior sensors 10B provided in the vehicle 10 traveling on a road. Based on the behavior information detected at a predetermined position and reference behavior information detected in a first region overlapping the predetermined position, the first determination unit 34 determines whether the behavior of the vehicle indicated by the behavior information detected at the predetermined position corresponds to the behavior of the vehicle indicated by the reference behavior information detected in the first region. Here, the reference behavior information refers to behavior of the vehicle 10 detected earlier than the behavior information. When it is determined that the behavior of the vehicle indicated by the behavior information detected at the predetermined position does not correspond to the behavior of the vehicle indicated by the reference behavior information detected in the first region, the second determination unit 36 determines, based on a plurality of pieces of behavior information detected within a second region wider than the first region and a plurality of pieces of reference behavior information detected within the second region, whether the behavior of the vehicle indicated by the behavior information detected within the second region corresponds to the behavior of the vehicle indicated by the reference behavior information. The learning unit 30 causes the second AI model N (AI model) to machine-learn the correspondence relationship between the behavior of the vehicle 10 and the road surface state of the road. When it is determined that the behavior of the vehicle indicated by the behavior information detected in the second region does not correspond to the behavior of the vehicle indicated by the reference behavior information, the learning unit 30 updates the second AI model N using the behavior information detected in the second region as teacher data.
 Here, when estimating the road surface state from vehicle behavior using an AI model, if the characteristics of the side that detects the vehicle behavior (here, the vehicle 10 and the behavior sensors 10B) change, the tendency of the vehicle behavior differs from when the AI model was trained, and the estimation accuracy of the road surface state using the AI model may therefore deteriorate. In contrast, when the behavior information detected at the predetermined position is not normal and the plurality of pieces of behavior information detected within the wider second region are also not normal, the calculation device 14 according to the present embodiment updates the second AI model N using the behavior information detected at that time. Therefore, according to the present embodiment, a change in the characteristics of the vehicle 10 can be appropriately detected and the second AI model N can be updated when the characteristics of the vehicle 10 have changed; as a result, deterioration in the estimation accuracy of the road surface state can be suppressed.
 Furthermore, the learning unit 30 does not update the second AI model N with the behavior information when it is determined that the behavior information detected at the predetermined position corresponds to the reference behavior information detected in the first region, or when it is determined that the behavior information detected in the second region corresponds to the reference behavior information. That is, in the present embodiment, when the behavior information detected at the predetermined position or within the second region is normal, the existing second AI model N is used because the characteristics of the vehicle 10 have not changed, and when the behavior information detected at the predetermined position and within the second region is not normal, the second AI model N is updated on the grounds that the characteristics of the vehicle 10 have changed. Therefore, according to the present embodiment, a second AI model N matched to changes in the characteristics of the vehicle 10 can be used, and deterioration in the estimation accuracy of the road surface state can be suppressed.
 Furthermore, the first determination unit 34 and the second determination unit 36 determine whether the behavior information corresponds to the reference behavior information by inputting the behavior information into the first AI model M (learning model), which is capable of determining whether the behavior information corresponds to the reference behavior information. By using the first AI model M for the abnormality determination of the behavior information, the accuracy of that determination can be improved, and changes in the characteristics of the vehicle 10 can be detected with high accuracy.
 The calculation device 14 further includes the calculation unit 38, which obtains the estimation result of the road surface state of the road by inputting the behavior information into the second AI model N. By using the second AI model N, deterioration in the estimation accuracy of the road surface state can be suppressed.
 Furthermore, the behavior information acquisition unit 32 acquires the behavior information detected by each of the plurality of behavior sensors 10B provided in the vehicle 10. For each of the plurality of behavior sensors 10B, the first determination unit 34 determines whether the behavior information detected at the predetermined position corresponds to the reference behavior information detected in the first region. When only some of the behavior information detected at the predetermined position does not correspond to the reference behavior information, the calculation unit 38 obtains the estimation result of the road surface state such that the degree of influence on the road surface state of the behavior information of the behavior sensor 10B whose behavior information does not correspond to the reference behavior information (the abnormal sensor) is lower than the degree of influence of the behavior information of the behavior sensors 10B other than the abnormal sensor. According to the present embodiment, when the characteristics of some of the behavior sensors 10B have changed, the degree of influence of those behavior sensors 10B is lowered, and deterioration in the estimation accuracy of the road surface state can therefore be suppressed.
 Furthermore, the second AI models N include a first model into which the behavior information detected by the behavior sensor 10B whose behavior information is not normal (the abnormal sensor) is input, and a second model into which the behavior information detected by the behavior sensors 10B other than the abnormal sensor is input. Based on the first estimation result of the road surface state, obtained by inputting the behavior information detected by the abnormal sensor into the first model, and the second estimation result of the road surface state, obtained by inputting the behavior information detected by the behavior sensors other than the abnormal sensor into the second model, the calculation unit 38 calculates the estimation result of the road surface state such that the degree of influence of the first estimation result on it is lower than the degree of influence of the second estimation result on it. According to the present embodiment, when the characteristics of some of the behavior sensors 10B have changed, the degree of influence on the road surface state of the second AI models N that use the data of those behavior sensors 10B is lowered, and deterioration in the estimation accuracy of the road surface state can therefore be suppressed.
 Although embodiments and examples of the present invention have been described above, the invention is not limited by the contents of these embodiments. The components described above include those that a person skilled in the art could easily conceive of, those that are substantially the same, and those within the so-called range of equivalents. Furthermore, the components described above can be combined as appropriate, and various omissions, substitutions, or modifications of the components can be made without departing from the gist of the embodiments described above.
 1 detection system
 10 vehicle
 10B behavior sensor
 12 measurement data acquisition device
 14 calculation device
 30 learning unit
 32 behavior information acquisition unit
 34 first determination unit
 36 second determination unit
 38 calculation unit
 M first AI model (learning model)
 N second AI model (AI model)

Claims (5)

  1.  A calculation device comprising:
     a behavior information acquisition unit that acquires behavior information indicating behavior of a vehicle, the behavior information being detected by a behavior sensor provided in the vehicle traveling on a road;
     a first determination unit that determines, based on the behavior information detected at a predetermined position and first reference behavior information, which is vehicle behavior detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information;
     a second determination unit that, when it is determined that the vehicle behavior indicated by the behavior information detected at the predetermined position does not correspond to the vehicle behavior indicated by the first reference behavior information, determines, based on the behavior information detected within a second region that includes the entire first region and is wider than the first region and second reference behavior information detected within the second region before the behavior information detected within the second region, whether the vehicle behavior indicated by the behavior information detected within the second region corresponds to the vehicle behavior indicated by the second reference behavior information; and
     a learning unit that causes an AI model to machine-learn a correspondence relationship between the behavior of the vehicle and a road surface condition of the road,
     wherein, when it is determined that the vehicle behavior indicated by the behavior information detected in the second region does not correspond to the vehicle behavior indicated by the second reference behavior information, the learning unit updates the AI model using the behavior information detected in the second region as teacher data.
  2.  The calculation device according to claim 1, wherein
     the behavior information acquisition unit acquires the behavior information detected by each of a plurality of the behavior sensors provided in the vehicle,
     the first determination unit determines, for each of the plurality of behavior sensors, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information, and
     a calculation unit obtains, when only the vehicle behavior indicated by a part of the behavior information detected at the predetermined position does not correspond to the vehicle behavior indicated by the first reference behavior information, an estimation result of the road surface condition such that a degree of influence on the road surface condition of the behavior information of an abnormal sensor, which is the behavior sensor whose behavior information does not correspond to the first reference behavior information, is lower than a degree of influence on the road surface condition of the behavior information of the behavior sensors other than the abnormal sensor.
  3.  The calculation device according to claim 2, wherein
     the AI model includes a first model to which the behavior information detected by the abnormal sensor is input and a second model to which the behavior information detected by the behavior sensors other than the abnormal sensor is input, and
     the calculation unit calculates the estimation result of the road surface condition, based on a first estimation result of the road surface condition obtained by inputting the behavior information detected by the abnormal sensor into the first model and a second estimation result of the road surface condition obtained by inputting the behavior information detected by the behavior sensors other than the abnormal sensor into the second model, such that a degree of influence of the first estimation result on the estimation result of the road surface condition is lower than a degree of influence of the second estimation result on the estimation result of the road surface condition.
  4.  A calculation method comprising:
     a step of acquiring behavior information indicating behavior of a vehicle, the behavior information being detected by a behavior sensor provided in the vehicle traveling on a road;
     a step of determining, based on the behavior information detected at a predetermined position and first reference behavior information, which is vehicle behavior detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information;
     a step of determining, when it is determined that the behavior information detected at the predetermined position does not correspond to the first reference behavior information, based on the behavior information detected within a second region that includes the entire first region and is wider than the first region and second reference behavior information detected within the second region before the behavior information detected within the second region, whether the vehicle behavior indicated by the behavior information detected within the second region corresponds to the vehicle behavior indicated by the second reference behavior information; and
     a step of causing an AI model to machine-learn a correspondence relationship between the behavior of the vehicle and a road surface condition of the road,
     wherein, in the step of causing the AI model to machine-learn, when it is determined that the vehicle behavior indicated by the behavior information detected in the second region does not correspond to the vehicle behavior indicated by the second reference behavior information, the AI model is updated using the behavior information detected in the second region as teacher data.
  5.  A program for causing a computer to execute:
     a step of acquiring behavior information indicating behavior of a vehicle, the behavior information being detected by a behavior sensor provided in the vehicle traveling on a road;
     a step of determining, based on the behavior information detected at a predetermined position and first reference behavior information, which is vehicle behavior detected in a first region overlapping the predetermined position before the behavior information detected at the predetermined position, whether the vehicle behavior indicated by the behavior information detected at the predetermined position corresponds to the vehicle behavior indicated by the first reference behavior information;
     a step of determining, when it is determined that the behavior information detected at the predetermined position does not correspond to the first reference behavior information, based on the behavior information detected within a second region that includes the entire first region and is wider than the first region and second reference behavior information detected within the second region before the behavior information detected within the second region, whether the vehicle behavior indicated by the behavior information detected within the second region corresponds to the vehicle behavior indicated by the second reference behavior information; and
     a step of causing an AI model to machine-learn a correspondence relationship between the behavior of the vehicle and a road surface condition of the road,
     wherein, in the step of causing the AI model to machine-learn, when it is determined that the vehicle behavior indicated by the behavior information in the second region does not correspond to the vehicle behavior indicated by the second reference behavior information, the AI model is updated using the behavior information detected in the second region as teacher data.
PCT/JP2022/027926 2021-07-16 2022-07-15 Calculation device, calculation method, and program WO2023286866A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022562408A JP7232967B1 (en) 2021-07-16 2022-07-15 Arithmetic device, arithmetic method and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021118197 2021-07-16
JP2021-118197 2021-07-16

Publications (1)

Publication Number Publication Date
WO2023286866A1 true WO2023286866A1 (en) 2023-01-19

Family

ID=84920357


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020013537A (en) * 2018-04-25 2020-01-23 トヨタ自動車株式会社 Road surface condition estimation device and road surface condition estimation method
JP2020144572A (en) * 2019-03-06 2020-09-10 エヌ・ティ・ティ・コムウェア株式会社 Road failure detection device, road failure detection method, and road failure detection program


Also Published As

Publication number Publication date
JPWO2023286866A1 (en) 2023-01-19
JP7232967B1 (en) 2023-03-03


Legal Events

Code Title Description
ENP  Entry into the national phase (Ref document number: 2022562408; Country of ref document: JP; Kind code of ref document: A)
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 22842207; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)