WO2020003392A1 - Driving assistance device and driving mode assessment model generation device - Google Patents


Publication number
WO2020003392A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
surrounding environment
driving mode
driving
driver state
Application number
PCT/JP2018/024277
Other languages
French (fr)
Japanese (ja)
Inventor
▲イ▼ 景
昌彦 谷本
Original Assignee
三菱電機株式会社
Application filed by 三菱電機株式会社
Priority to PCT/JP2018/024277
Priority to JP2020523817A (patent JP6746043B2)
Publication of WO2020003392A1


Classifications

    • G: PHYSICS
    • G08: SIGNALLING
    • G08G: TRAFFIC CONTROL SYSTEMS
    • G08G 1/00: Traffic control systems for road vehicles
    • G08G 1/09: Arrangements for giving variable traffic instructions

Definitions

  • The present invention relates to a driving support device that assists in setting a driving mode suited to the situation in a vehicle having an automatic driving function, and to a device that generates the driving mode determination model used by the driving support device.
  • As a technique for selecting an appropriate driving mode, Patent Literature 1, for example, discloses a technique for determining whether a driver can return to manual driving in a vehicle under automatic driving, based on the results of detecting the driver's actions and the driver's state. Patent Literature 1 likewise discloses a method of determining whether automatic driving can be continued in a vehicle under automatic driving, based on the vehicle's travel position information, map data, and travel information.
  • In Patent Literature 1, however, whether a return to manual driving is possible is determined based only on the driver's state, even though the appropriate driving mode can differ with the surrounding environment. For example, a slightly tired driver may still be able to drive manually on a straight road with few pedestrians or other vehicles, but in a situation with complicated lane branches and many pedestrians and other vehicles, manual driving is difficult unless the driver can pay close attention. Likewise, whether automatic driving is permissible may depend not only on the surrounding environment of the vehicle but also on the state of the driver; determining it from the environment around the vehicle alone is insufficient. It is therefore desirable to determine whether automatic driving and manual driving are each possible, that is, to determine the driving mode, based on the combination of the driver's state and the surrounding environment of the vehicle.
  • A first object of the present invention is to obtain a driving support device capable of assisting in appropriately determining a driving mode based on both the driver's state and the surrounding environment of the vehicle.
  • A second object is to obtain a driving mode determination model generation device that generates a driving mode determination model suitable for use in such a driving support device.
  • a surrounding environment information output unit that outputs surrounding environment information of a vehicle
  • a driver state information output unit that outputs state information of a driver corresponding to the surrounding environment information
  • a driving mode evaluation unit that calculates, based on the combination of the surrounding environment information and the driver state information, an evaluation value indicating the suitability of automatic driving and of manual driving for that combination.
  • Because the device of the present invention calculates an evaluation value representing the suitability of automatic driving and of manual driving for the combination of the vehicle's surrounding environment information and the driver state information, this evaluation value can assist in determining a driving mode appropriate to the surrounding environment and the driver's state.
  • FIG. 1 is a configuration diagram illustrating the configuration of a driving support device and a driving mode determination model generation device according to a first embodiment. FIG. 2 is a schematic diagram illustrating learning data stored in a learning data storage unit provided in the driving mode determination model generation device.
  • FIG. 3 is a configuration diagram illustrating the hardware configuration of the driving control device, the information server, the surrounding environment detection unit, and the driver state detection unit according to Embodiment 1.
  • FIG. 4 is a flowchart showing the operation of the driving control device including the driving support device according to the first embodiment.
  • A flowchart illustrating the operation of the information server including the driving mode determination model generation device according to the first embodiment.
  • A flowchart showing the operation of a driving control device including a driving support device according to Embodiment 2 and of an information server including a driving mode determination model generation device.
  • FIG. 1 is a configuration diagram showing a configuration of a driving support device 1 and a driving mode determination model generation device 2 according to Embodiment 1 for carrying out the present invention.
  • the driving support device 1 is provided as a part of a driving control device 3 that controls driving of a vehicle
  • The driving mode determination model generation device 2 is provided as a part of an information server 4 that provides various information services to vehicles as appropriate.
  • The driving control device 3 is mounted on the vehicle.
  • The information server 4 is installed at an arbitrary location and connected to the driving control device 3 mounted on the vehicle via a communication line.
  • the driving control device 3 includes a driving control unit 10 that performs overall driving control of the vehicle.
  • The driving control unit 10 controls the vehicle based on input information, including switching between automatic driving and manual driving, and performs driving control by controlling drive-system devices for acceleration, deceleration, steering, and the like.
  • The driving control device 3 includes, as a configuration related to the driving mode determination support of the present invention, the driving support device 1, a driving mode candidate selection unit 7, a driving mode presentation unit 8, a driving mode input unit 9, and a vehicle communication unit 11. Further, a surrounding environment detection unit 5 and a driver state detection unit 6 are connected to the driving support device 1 via communication lines.
  • The surrounding environment detection unit 5 is mounted on the vehicle, or installed in an urban area or on a road on which the vehicle travels; it detects the surrounding environment of the vehicle, such as the presence and positions of people, vehicles, and structures around the vehicle, and reports it to the driving support device 1. Specifically, the surrounding environment detection unit 5 outputs surrounding environment detection data representing the detected surrounding environment to the surrounding environment information output unit 13; a plurality of surrounding environment detection units may be provided. Examples of the surrounding environment detected by the surrounding environment detection unit 5 include pedestrians and other vehicles, structures existing on roads, obstacles, and the like.
  • The surrounding environment detection unit 5 includes various sensors that detect these, for example a camera, LiDAR (Light Detection and Ranging), and radar (Radio Detection and Ranging), and a communication device that outputs the information detected by the sensors to the surrounding environment information output unit 13 as surrounding environment detection data.
  • the surrounding environment detection unit 5 only needs to be able to detect the surrounding environment of the vehicle, and may be mounted on the vehicle or may be installed in a facility such as a traffic light installed on a road. For example, if the sensor provided in the surrounding environment detection unit 5 is a camera mounted on a vehicle, image information obtained by imaging the surroundings from the vehicle can be obtained.
  • If the sensor is a LiDAR, radar, or the like installed on a traffic signal, it is also possible to obtain detection information on pedestrians, other traveling vehicles, obstacles, and the like around a traffic signal near the vehicle, and in particular on targets that a camera mounted on the vehicle cannot image because they are hidden behind other objects.
  • The driver state detection unit 6 is mounted on the vehicle, detects the state of the driver, and reports it to the driving support device 1. Specifically, the driver state detection unit 6 outputs driver state detection data representing the detected driver state to the driver state information output unit 14; a plurality of driver state detection units may be provided. Examples of the driver state acquired by the driver state detection unit 6 include electrocardiogram, myoelectricity, eye movement, brain waves, respiration, blood pressure, and perspiration. The driver state detection unit 6 includes biosensors and a camera that detect these, and a communication device that outputs the detected information to the driver state information output unit 14 as driver state detection data.
  • The driving support device 1 is a device that, when a vehicle having an automatic driving function is driven, assists in appropriately switching between automatic driving and manual driving, that is, in selecting an appropriate driving mode, taking into account the driver's state transmitted from the driver state detection unit 6 and the surrounding environment of the vehicle transmitted from the surrounding environment detection unit 5. Specifically, it calculates and presents evaluation values representing the suitability of each of automatic driving and manual driving for the current situation. Because the evaluation values are calculated based on the driver's state and the surrounding environment of the vehicle, they serve as useful judgment material when the driver, or the system controlling the driving mode, determines the driving mode. The detailed configuration of the driving support device 1 will be described later.
  • the driving mode candidate selection unit 7 selects an appropriate driving mode candidate based on the evaluation value of the driving mode calculated by the driving support device 1.
  • The selected driving mode candidate is compared with the driving mode currently applied by the driving control unit 10. If they match, the driving mode presentation unit 8 presents, as driving support information, the fact that the current driving mode is appropriate; if they do not match, the driving mode presentation unit 8 presents, as driving support information, content prompting the driver to switch to the selected driving mode candidate.
  • The driving mode presentation unit 8 includes, for example, a speaker that outputs audio, a display that shows a screen, and the like.
  • When an urgent switch of the driving mode is required, for example when the vehicle is rapidly approaching a road facility or the like, the driving mode candidate selection unit 7 also outputs a signal recommending forcible switching to the driving control unit 10, whereby the driving control unit 10 can forcibly switch the driving mode.
  • The driving mode input unit 9 is used by the driver to input whether or not to switch the driving mode in response to the driving mode candidate presented by the driving mode presentation unit 8. When the driver chooses to switch, a signal indicating the switch to the driving mode candidate is output to the driving control unit 10.
  • Upon receiving this signal, the driving control unit 10 switches the driving mode and performs driving control according to the switched driving mode.
  • The driving control unit 10 may also forcibly switch the driving mode based on the signal from the driving mode candidate selection unit 7 recommending forcible switching.
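The candidate-selection and switching behavior described in the preceding paragraphs can be sketched as follows. This is an illustrative outline only, not the patented implementation; the function names, the threshold-free comparison, and the string return values are all assumptions made for the example.

```python
# Illustrative sketch of the driving mode candidate logic: the mode with the
# higher evaluation value becomes the candidate; matching / non-matching modes
# drive what is presented, and an urgent situation forces a switch.

AUTO, MANUAL = "automatic", "manual"

def select_candidate(eval_auto: float, eval_manual: float) -> str:
    """Pick the driving mode with the higher suitability evaluation value."""
    return AUTO if eval_auto >= eval_manual else MANUAL

def support_action(candidate: str, current_mode: str, urgent: bool) -> str:
    """Decide what the support system should do with the candidate mode."""
    if urgent and candidate != current_mode:
        return "force_switch"          # recommend forcible switching to control unit
    if candidate == current_mode:
        return "present_current_ok"    # current driving mode is appropriate
    return "present_switch_prompt"     # prompt driver to switch to candidate

# Example: manual driving scores poorly in a complex environment.
action = support_action(select_candidate(0.9, 0.2), MANUAL, urgent=False)
print(action)  # -> "present_switch_prompt"
```

In this sketch the forcible-switch path bypasses the driver prompt entirely, mirroring the urgent case described above; in the normal case the decision is only presented, and the actual switch waits for the driver's input.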
  • The vehicle communication unit 11 communicates with the server communication unit 12 provided in the information server 4 and exchanges information between the driving control device 3 and the information server 4. In the present embodiment, it acquires the driving mode determination model generated by the driving mode determination model generation device 2 and outputs it to the determination model storage unit 16.
  • The driving control device 3 is configured as described above. Next, the configuration of the driving support device 1 of the present invention included in the driving control device 3 will be described.
  • The driving support device 1 includes a surrounding environment information output unit 13, a driver state information output unit 14, a driving mode evaluation unit 15, and a determination model storage unit 16.
  • The surrounding environment information output unit 13 generates, from the surrounding environment detection data of the vehicle obtained from the surrounding environment detection unit 5, surrounding environment information in a format suitable for determining the driving mode, and outputs it to the driving mode evaluation unit 15. More specifically, the surrounding environment information output unit 13 includes a surrounding environment description unit 17, a morpheme extraction unit 18, and a numerical vectorization unit 19.
  • The surrounding environment description unit 17 uses the surrounding environment detection data obtained by the surrounding environment detection unit 5 to recognize pedestrians, other vehicles, obstacles, and the like, and generates character string information describing the surrounding environment as a character string in a natural language. The generated character string information is output to the morpheme extraction unit 18.
  • The morpheme extraction unit 18 performs morphological analysis on the character string information generated by the surrounding environment description unit 17, extracts morphemes, and outputs the extracted morphemes to the numerical vectorization unit 19.
  • Morphological analysis is a language processing technique that divides text into morphemes, the minimum meaningful units of a language. For example, the character string "there are three pedestrians ahead" is divided into the morphemes "forward", "ni", "pedestrian", "ga", "3", "people", and "is".
  • the numerical vectorization unit 19 converts the morpheme extracted by the morpheme extraction unit 18 into a numerical vector.
  • the One-Hot model is used as a method for converting into a numerical vector.
  • The One-Hot model is a model in which the number of words in the corpus is the number of dimensions of the vector, and each dimension is set to 1 when its word appears and to 0 when it does not.
  • Other methods, such as Word2Vec, can also be used.
  • the morpheme is converted into a numeric vector by the numeric vectorization unit 19, but this is for the purpose of digitizing the morpheme.
  • The method of numerical conversion is not limited to vectorization; for example, conversion into a scalar, a matrix, or a higher-order tensor may be used.
  • the vectorized morpheme obtained by the numerical vectorization unit 19 is output to the driving mode evaluation unit 15 as surrounding environment information.
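As a concrete illustration of the One-Hot conversion performed by the numerical vectorization unit 19, the following sketch assigns one vector dimension per corpus word. The corpus, the morphemes, and the bag-of-words summation for a whole description are assumptions made for the example, not taken from the embodiment.

```python
# Minimal One-Hot sketch: each corpus word owns one dimension; a morpheme maps
# to 1 in its own dimension and 0 elsewhere.

corpus = ["general road", "running", "forward", "pedestrian", "3", "people", "is"]
index = {word: i for i, word in enumerate(corpus)}  # word -> dimension number

def one_hot(morpheme: str) -> list[int]:
    vec = [0] * len(corpus)
    if morpheme in index:               # unknown morphemes stay all-zero here
        vec[index[morpheme]] = 1
    return vec

def sentence_vector(morphemes: list[str]) -> list[int]:
    """A simple bag-of-words sum of one-hot vectors for a whole description."""
    total = [0] * len(corpus)
    for m in morphemes:
        for i, v in enumerate(one_hot(m)):
            total[i] += v
    return total

print(one_hot("pedestrian"))  # -> [0, 0, 0, 1, 0, 0, 0]
print(sentence_vector(["forward", "pedestrian", "3", "people", "is"]))
```

With a real corpus the dimensionality equals the vocabulary size, which is why the text later notes that more dimensions give a finer-grained expression of the situation.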
  • The driver state information output unit 14 generates, from the driver state detection data obtained from the driver state detection unit 6, driver state information in a format suitable for determining the driving mode, and outputs it to the driving mode evaluation unit 15. More specifically, the driver state information output unit 14 includes a driver state feature amount extraction unit 20.
  • The driver state feature amount extraction unit 20 extracts, using the driver state detection data obtained by the driver state detection unit 6, the biosignal feature amounts of the driver state used by the driving mode evaluation unit 15.
  • As the biosignal feature amounts, for example, the heartbeat interval, the QRS width, and the QRS wave height are obtained from an electrocardiogram signal obtained as driver state detection data, and the pupil diameter is obtained from image data of the eyeball obtained as driver state detection data.
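The heartbeat interval mentioned above can be illustrated with a toy extraction routine: given R-peak positions in an electrocardiogram signal, the RR (heartbeat) interval is the spacing between consecutive peaks. The threshold-based peak detector below is a stand-in assumption; real QRS detection is considerably more involved.

```python
# Toy sketch of heartbeat-interval extraction from an ECG-like signal.

def detect_r_peaks(ecg: list[float], threshold: float) -> list[int]:
    """Indices of local maxima above threshold (toy R-peak detector)."""
    peaks = []
    for i in range(1, len(ecg) - 1):
        if ecg[i] > threshold and ecg[i] >= ecg[i - 1] and ecg[i] > ecg[i + 1]:
            peaks.append(i)
    return peaks

def rr_intervals(peak_indices: list[int], fs: float) -> list[float]:
    """Consecutive R-peak spacings in seconds, given sampling rate fs (Hz)."""
    return [(b - a) / fs for a, b in zip(peak_indices, peak_indices[1:])]

# Synthetic trace: R-like spikes at samples 10, 110, 215 (fs = 100 Hz).
ecg = [0.0] * 300
for idx in (10, 110, 215):
    ecg[idx] = 1.0
print(rr_intervals(detect_r_peaks(ecg, 0.5), fs=100.0))  # -> [1.0, 1.05]
```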
  • The driving mode evaluation unit 15 calculates, based on the surrounding environment information output from the surrounding environment information output unit 13, the driver state information output from the driver state information output unit 14, and the driving mode determination model stored in the determination model storage unit 16, evaluation values representing the degree of suitability of manual driving and of automatic driving, and outputs the evaluation values to the driving mode candidate selection unit 7.
  • The driving mode determination model indicates the degree of influence that the surrounding environment of the vehicle and the state of the driver have on each of automatic driving and manual driving. By constructing this driving mode determination model appropriately, evaluation values of automatic driving and manual driving that are useful for selecting an appropriate driving mode can be calculated more appropriately.
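One simple way to realize such influence degrees, offered here only as a hypothetical sketch, is to hold one weight per feature and per driving mode and compute each evaluation value as a squashed weighted sum. The features, weights, and logistic squashing below are all assumptions for illustration, not the patent's model.

```python
# Hypothetical evaluation-value calculation: weights play the role of the
# "degree of influence" of each feature on each driving mode.

import math

def evaluate(features: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum of features, squashed into (0, 1) as a suitability score."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# features = [pedestrian count (environment), RR interval in s (driver state)]
features = [3.0, 0.6]                   # busy road, short heartbeat interval
w_auto, b_auto = [0.8, -2.0], 0.5       # influence degrees for "automatic"
w_manual, b_manual = [-0.8, 2.0], 0.5   # influence degrees for "manual"

eval_auto = evaluate(features, w_auto, b_auto)
eval_manual = evaluate(features, w_manual, b_manual)
print(eval_auto > eval_manual)  # -> True: automatic driving suits this situation
```

Under these invented weights, many pedestrians push the automatic-driving score up and the manual-driving score down, matching the reasoning given earlier about complex environments.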
  • The driving mode determination model is generated in advance using prepared learning data or the like, and is stored in the determination model storage unit 16.
  • The determination model storage unit 16 stores the driving mode determination model acquired by the vehicle communication unit 11 from the information server 4 through communication with the server communication unit 12.
  • The driving support device 1 is configured as described above. Next, the driving mode determination model generation device 2 of the present invention will be described. As noted above, the driving mode determination model used in the driving support device 1 is generated in advance and stored in the determination model storage unit 16; it is the driving mode determination model generation device 2 that generates this model. In the present embodiment, the driving mode determination model generation device 2 is provided in the information server 4, which provides various information services to vehicles, and the generated driving mode determination model is provided to the driving control device 3 of each vehicle as appropriate.
  • The information server 4 includes a determination model storage unit 21 and the server communication unit 12 in addition to the driving mode determination model generation device 2. The information server 4 also provides the vehicle with various types of traffic information other than the driving mode determination model, but these are omitted in the present embodiment.
  • The determination model storage unit 21 stores the driving mode determination model generated by the driving mode determination model generation device 2, and, upon receiving a request signal from a vehicle via the server communication unit 12, provides the stored driving mode determination model to the requesting vehicle via the server communication unit 12.
  • The server communication unit 12 performs various communications with the vehicle communication unit 11; in the present embodiment, it in particular outputs the driving mode determination model acquired from the determination model storage unit 21 to the vehicle communication unit 11.
  • The driving mode determination model generation device 2 is a device that generates the driving mode determination model used when the driving support device 1 described above calculates the evaluation values. As shown in FIG. 1, the driving mode determination model generation device 2 includes a learning data storage unit 22, a surrounding environment description unit 23, a morpheme extraction unit 24, a numerical vectorization unit 25, a driver state feature amount extraction unit 26, and a driving mode determination model generation unit 27.
  • the learning data storage unit 22 stores learning data for generating a driving mode determination model.
  • The learning data storage unit 22 outputs the stored learning data to the surrounding environment description unit 23, the driver state feature amount extraction unit 26, and the driving mode determination model generation unit 27.
  • FIG. 2 shows an example of learning data stored in the learning data storage unit 22.
  • In FIG. 2, one row indicates one record. One record consists of a pair of surrounding environment detection data and driver state detection data, together with driving mode data as driving mode information indicating the appropriate driving mode corresponding to that data pair. The learning data consist of a large set of such records, each assigned an identifier represented by "No.".
  • Such learning data are obtained, for example, by actually measuring the surrounding environment and the driver's state while the vehicle is driven, and specifying the appropriate driving mode at that time.
  • Detection devices similar to the surrounding environment detection unit 5 and the driver state detection unit 6 described with reference to FIG. 1 are mounted on the vehicle, and the data detected by these devices at the same timing are acquired as surrounding environment detection data and driver state detection data. Furthermore, an evaluator aboard the vehicle specifies the driving modes suitable for the various situations during or after the run, and learning data as shown in FIG. 2 are obtained by associating the surrounding environment detection data and the driver state detection data with the appropriate driving mode. When generating the learning data, manual driving can of course be associated as the driving mode with detection data recorded while manual driving was performed; however, if automatic driving is judged to be the suitable driving mode even for detection data recorded during manual driving, automatic driving may be specified instead.
  • The learning data obtained in this way are a large collection of data sets of various combinations of surrounding environments and driver states, with the appropriate driving mode corresponding to each combination.
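A record of the kind shown in FIG. 2 could be represented as follows. The field names are hypothetical and the example values are invented; FIG. 2's actual columns may differ.

```python
# Hypothetical in-memory representation of one learning-data record:
# a detection-data pair plus the evaluator-specified appropriate driving mode.

from dataclasses import dataclass

@dataclass
class LearningRecord:
    no: int                # record identifier ("No." in FIG. 2)
    surrounding_env: str   # surrounding environment detection data (summarized)
    driver_state: dict     # driver state detection data (feature values)
    driving_mode: str      # appropriate mode: "automatic" or "manual"

learning_data = [
    LearningRecord(1, "straight road, no pedestrians", {"rr_s": 0.9}, "manual"),
    LearningRecord(2, "complex junction, many pedestrians", {"rr_s": 0.6}, "automatic"),
]
print(len(learning_data), learning_data[1].driving_mode)
```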
  • The driving mode determination model generation device 2 digitizes the surrounding environment detection data and the driver state detection data in the learning data stored in the learning data storage unit 22, and generates a driving mode determination model using a statistical method.
  • The surrounding environment description unit 23 uses the surrounding environment detection data input from the learning data storage unit 22 to generate character string information in which the surrounding environment is described by a character string in a natural language, and outputs the generated character string information to the morpheme extraction unit 24.
  • The morpheme extraction unit 24 performs morphological analysis on the character string information generated by the surrounding environment description unit 23, extracts morphemes, and outputs the extracted morphemes to the numerical vectorization unit 25.
  • The numerical vectorization unit 25 converts the morphemes extracted by the morpheme extraction unit 24 into a numerical vector and outputs the vector to the driving mode determination model generation unit 27.
  • The driver state feature amount extraction unit 26 extracts the biosignal feature amounts of the driver state using the driver state detection data input from the learning data storage unit 22, and outputs them to the driving mode determination model generation unit 27.
  • The driving mode determination model generation unit 27 generates the driving mode determination model using the numerical vector output by the numerical vectorization unit 25 as the surrounding environment information, the driver's biosignal feature amounts output by the driver state feature amount extraction unit 26 as the driver state information, and the driving mode data stored in the learning data storage unit 22 corresponding to that surrounding environment information and driver state information. For example, when the driver is drowsy, the vehicle should run in the automatic driving mode; since the heart rate decreases during drowsiness, the heartbeat interval is important as a driver state feature for determining whether automatic driving is the appropriate driving mode, and the degree of influence of the heartbeat interval corresponding to the label "automatic driving" in the driving mode determination model becomes large.
  • The driving mode determination model generation unit 27 performs the same processing for all data included in the learning data, optimizes, for each driving mode, the degree of influence of each component of the numerical vector of the surrounding environment information and of each biosignal feature amount of the driver state information, and thereby generates the driving mode determination model. The generated driving mode determination model is output to the determination model storage unit 21.
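The per-mode optimization of influence degrees described above can be sketched, under the assumption of a logistic-regression style statistical method (the patent does not fix one), as follows. The learning data, features, and hyperparameters are invented for illustration.

```python
# Toy sketch: fit one influence weight per feature for the "automatic" label
# by logistic regression with plain stochastic gradient descent.

import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, labels, epochs=2000, lr=0.5):
    """samples: feature vectors; labels: 1 = automatic suitable, 0 = manual."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y                     # gradient of the log-loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# [normalized pedestrian density, RR interval in s] -> automatic suitable?
X = [[0.9, 0.6], [0.8, 0.5], [0.1, 0.9], [0.2, 1.0]]
y = [1, 1, 0, 0]
w, b = train(X, y)
p = sigmoid(sum(wi * xi for wi, xi in zip(w, [0.85, 0.55])) + b)
print(p > 0.5)  # -> True: busy environment + short RR interval -> automatic
```

The learned weights play the role of the optimized influence degrees: after training, a high pedestrian density and a short heartbeat interval both push the score toward the "automatic driving" label, consistent with the drowsy-driver example above.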
  • FIG. 3 is a configuration diagram showing the hardware configuration of the driving control device 3, the information server 4, and related units shown in FIG. 1. It consists of processing devices 28 and 29 such as CPUs (Central Processing Units), storage devices 30 and 31 such as ROM (Read Only Memory), RAM, and hard disk devices, input devices 32, 33, and 34 such as sensors, an output device 35 such as a speaker or a display, and communication devices 36 and 37, connected by buses.
  • the CPU may have its own memory.
  • the surrounding environment detecting unit 5 shown in FIG. 1 is realized by the input device 32, the driver state detecting unit 6 is realized by the input device 33, the driving mode input unit 9 is realized by the input device 34, and the driving mode presenting unit 8 is realized by the output device 35.
  • The data stored in the determination model storage unit 16 are stored in the storage device 30.
  • The surrounding environment description unit 17, the morpheme extraction unit 18, the numerical vectorization unit 19, the driver state feature amount extraction unit 20, the driving mode evaluation unit 15, the driving mode candidate selection unit 7, and the driving control unit 10 are realized by the processing device 28 executing programs stored in the storage device 30: the processing device 28 reads and executes the stored programs as appropriate to realize the functions of these units. Note that these functions are not limited to a combination of hardware and software; the programs may be implemented in the processing device 28 and realized by hardware alone, such as a system LSI (Large Scale Integration).
  • The data stored in the determination model storage unit 21 and the learning data storage unit 22 are stored in the storage device 31.
  • The surrounding environment description unit 23, the morpheme extraction unit 24, the numerical vectorization unit 25, the driver state feature amount extraction unit 26, and the driving mode determination model generation unit 27 are realized by the processing device 29 executing programs stored in the storage device 31: the processing device 29 reads and executes the stored programs as appropriate to realize the functions of these units. As with the processing device 28, these functions are not limited to a combination of hardware and software; the programs may be implemented in the processing device 29 and realized by hardware alone.
  • the vehicle communication unit 11 is realized by the communication device 36
  • the server communication unit 12 is realized by the communication device 37.
  • the surrounding environment detecting unit 5 detects the surrounding environment of the vehicle using various sensors.
  • Examples of the surrounding environment to be detected include pedestrians and other vehicles, structures existing on roads, obstacles, and the like; here, the various sensors are assumed to be cameras mounted on the vehicle that image the area ahead.
  • the surrounding environment detection unit 5 outputs surrounding environment detection data representing the detected surrounding environment to the surrounding environment information output unit 13.
  • In step S2, when the surrounding environment information output unit 13 receives the surrounding environment detection data obtained by the surrounding environment detection unit 5 in step S1, the surrounding environment description unit 17 in the surrounding environment information output unit 13 generates, from the received surrounding environment detection data, character string information described as a character string in a natural language. For example, when image data obtained by imaging the area ahead of the vehicle by the surrounding environment detection unit 5 is sent as the surrounding environment detection data, the surrounding environment description unit 17 converts the surrounding situation shown in the captured image into a character string, that is, generates character string information describing the surrounding situation by a character string in a natural language. An example of the operation in step S2 will be described.
  • FIG. 5 is a schematic diagram illustrating an example of the running vehicle 38 and the surrounding environment of the vehicle 38.
  • the vehicle 38 is traveling on a general road 39 in the direction of the arrow, and two pedestrians 40 are at the corner of the right sidewalk near the intersection and one pedestrian 40 is at the left side corner near the intersection.
  • the peripheral environment description unit 17 that has received the image data of the front imaged from the vehicle 38 performs image recognition processing, and generates character string information representing the captured image in a natural language.
  • the image processing and the character string conversion are performed using, for example, a convolutional neural network (CNN) or a recurrent neural network (RNN), which are machine learning models.
  • when the character string information generated by the surrounding environment description unit 17 in step S2 is output to the morpheme extraction unit 18, the morpheme extraction unit 18 performs morphological analysis on the sent character string information in step S3 and extracts morphemes.
  • morphological analysis is a process of segmenting a character string into morphemes. For example, when the character string "running on a general road and there are three pedestrians ahead" is morphologically analyzed, it is divided into morphemes such as "general road", "running", "forward", "ni", "pedestrian", "ga", "3", "people", and "is" (where "ni" and "ga" are particles arising from the original Japanese sentence).
  • the morpheme extraction unit 18 outputs the extracted morpheme to the numerical vectorization unit 19.
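For illustration only (not part of the disclosed embodiment), the segmentation in step S3 can be sketched with a toy dictionary-based longest-match segmenter; a real system would use a full morphological analyzer such as MeCab, and the English dictionary below is a made-up stand-in for the Japanese example above:

```python
# Toy longest-match morphological segmenter -- a simplified stand-in for a
# real morphological analyzer; the dictionary below is hypothetical.
MORPHEME_DICT = {
    "general road", "running", "forward", "pedestrian", "there", "are", "3",
    "people", "on", "a", "and", "ahead",
}

def segment(text):
    """Greedy longest-match segmentation of `text` into known morphemes."""
    words = text.lower().split()
    morphemes, i = [], 0
    while i < len(words):
        # Try the longest multi-word morpheme first (e.g. "general road").
        for span in range(min(3, len(words) - i), 0, -1):
            candidate = " ".join(words[i:i + span])
            if candidate in MORPHEME_DICT:
                morphemes.append(candidate)
                i += span
                break
        else:
            morphemes.append(words[i])  # unknown word: keep as-is
            i += 1
    return morphemes

print(segment("Running on a general road and there are 3 pedestrians ahead"))
```

Note how "general road" is kept as a single morpheme by the multi-word lookup, mirroring the compound entries in the example above.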
  • in step S4, the morphemes extracted by the morpheme extraction unit 18 in step S3 are converted into a numerical vector by the numerical vectorization unit 19.
  • FIG. 6 is a correspondence table showing the correspondence between dimensions, morphemes, and the vector values obtained when the morphemes 41 obtained in step S3, such as "general road", "running", "forward", "ni", "pedestrian", "ga", "3", "people", and "is", are vectorized. Although the correspondence table of FIG. 6 shows only up to 16 dimensions, the greater the number of dimensions, the finer the granularity with which the situation can be expressed. Further, in the example of FIG. 6, particles are not each assigned a dimension of their own, but they may be handled that way.
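A minimal sketch of the step S4 conversion, assuming the binary dimension/morpheme correspondence idea of FIG. 6 (the 16-entry vocabulary below is invented for illustration):

```python
# Binary bag-of-morphemes vectorization; the vocabulary is hypothetical but
# follows the dimension/morpheme correspondence idea of FIG. 6 (16 dimensions).
VOCAB = ["general road", "highway", "intersection", "running", "stopped",
         "forward", "behind", "pedestrian", "vehicle", "bicycle", "child",
         "soccer ball", "jump", "3", "people", "obstacle"]
DIM = {m: d for d, m in enumerate(VOCAB)}

def vectorize(morphemes):
    """Map extracted morphemes onto a fixed-dimension 0/1 numerical vector."""
    vec = [0] * len(VOCAB)
    for m in morphemes:
        if m in DIM:            # particles / unknown morphemes are ignored,
            vec[DIM[m]] = 1     # as in the FIG. 6 example
    return vec

v = vectorize(["general road", "running", "forward", "pedestrian", "3", "people"])
```

A larger vocabulary simply means a longer vector, matching the remark that more dimensions give finer granularity.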
  • the numerical vectorization unit 19 outputs the generated numerical vector to the operation mode evaluation unit 15.
  • in step S5, the driver state detection unit 6 detects the driver's state with a biological sensor or a camera at a timing corresponding to the detection of the surrounding environment by the surrounding environment detection unit 5 in step S1.
  • as the detection of the driver's state, there is, for example, the case where an electrocardiogram is obtained by an electrocardiograph and eyeball image data is obtained by a camera, as described above.
  • the driver status detection unit 6 outputs the detected driver status information to the driver status information output unit 14 as driver status detection data.
  • in step S6, the driver state feature amount extraction unit 20 in the driver state information output unit 14 extracts biological signal feature amounts of the driver's state using the driver state detection data sent from the driver state detection unit 6 in step S5. For example, the driver state feature amount extraction unit 20 extracts the heartbeat interval, the QRS width, and the QRS wave height as feature amounts from an electrocardiogram signal serving as the driver state detection data, and extracts the pupil diameter as a feature amount from eyeball image data serving as the driver state detection data.
  • FIG. 7 shows an example of the extracted driver state feature quantity.
  • the driver state feature amount extraction unit 20 outputs the extracted driver state feature amounts to the driving mode evaluation unit 15. Steps S5 and S6 constitute the processing for obtaining the driver state information.
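As an illustrative aside (not part of the disclosed embodiment), extracting the heartbeat-interval feature from an electrocardiogram signal could look like the following minimal sketch; real systems would use a dedicated QRS detector, and the threshold-based peak picking here is an assumption:

```python
# Minimal sketch of extracting a heartbeat-interval (R-R) feature from an
# ECG-like signal: R peaks are taken as local maxima above a threshold.
def rr_intervals(signal, fs, threshold=0.6):
    """Return R-R intervals in seconds for peaks exceeding `threshold`."""
    peaks = [i for i in range(1, len(signal) - 1)
             if signal[i] > threshold
             and signal[i] >= signal[i - 1] and signal[i] > signal[i + 1]]
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

# Synthetic signal: unit spikes every 100 samples at fs = 100 Hz -> 1.0 s intervals.
fs = 100
sig = [1.0 if i % 100 == 0 else 0.0 for i in range(400)]
print(rr_intervals(sig, fs))
```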
  • in step S7, the driving mode evaluation unit 15 calculates evaluation values of the suitability of manual driving and automatic driving for the situation in which the surrounding environment information and the driver state information were obtained in steps S1 and S5, based on the numerical vector input from the numerical vectorization unit 19, the driver state feature amounts input from the driver state feature amount extraction unit 20, and the driving mode determination model stored in the determination model storage unit 16.
  • FIG. 8 is a schematic diagram showing an example of the concept of an operation mode determination model used for calculating the evaluation value.
  • the driving mode determination model indicates, for each component of the numerical vector serving as the surrounding environment information and for each driver state feature amount serving as the driver state information, the degree of influence on automatic driving and manual driving; as shown in FIG. 8, each degree of influence is represented as a numerical value.
  • a unit in which the influences of the components of the numerical vector and the driver state feature amounts are integrated for each driving mode is called an influence vector.
  • the driving mode evaluation unit 15 first combines the numerical vector input from the numerical vectorization unit 19 and the driver state feature amounts input from the driver state feature amount extraction unit 20 to generate a feature amount vector C representing both the surrounding environment and the driver state. When the numerical vector has n dimensions and there are m driver state feature amounts, the generated feature amount vector C is an (n + m)-dimensional vector.
  • next, the driving mode evaluation unit 15 calculates an evaluation value for each driving mode. Specifically, the inner product of the feature amount vector and the influence vector is calculated according to equation (1):

    P(Y_i | X) = exp(W_i · X) / Σ_j exp(W_j · X)   (1)

  Here, X is the feature amount vector, P(Y_i | X) is the evaluation value of driving mode Y_i when X is input, and W_i is the influence vector of each driving mode. The denominator in equation (1) is a normalization constant, and the feature amount vector used as input is assumed to have been converted into binary values of 0 or 1. Note that the subscript i of W_i does not denote a vector component; it merely distinguishes the influence vector for automatic driving from the influence vector for manual driving.
  • the formula for calculating the evaluation value for each driving mode is not limited to this; any formula that reflects the degree of influence of each feature amount based on the driving mode determination model may be used.
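As an editorial sketch of the normalized inner-product evaluation of equation (1) (not part of the disclosed embodiment), assuming a two-mode softmax form; the influence values and feature dimensions below are made up for illustration:

```python
import math

# Evaluation values as normalized exponentiated inner products of the feature
# amount vector with each driving mode's influence vector (softmax form).
def evaluate(feature_vec, influence):
    """influence: {mode: weight vector}; returns {mode: evaluation value in [0,1]}."""
    scores = {mode: math.exp(sum(w * x for w, x in zip(wvec, feature_vec)))
              for mode, wvec in influence.items()}
    z = sum(scores.values())                 # normalization constant (denominator)
    return {mode: s / z for mode, s in scores.items()}

influence = {"automatic": [0.9, 0.4, 1.2], "manual": [0.1, 0.8, -0.5]}
x = [1, 0, 1]                                # binary feature amount vector
ev = evaluate(x, influence)
```

The two evaluation values sum to 1, so they can be compared directly or by their difference, as in the threshold method described below.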
  • FIG. 9 is a diagram showing an example in which the evaluation values of the suitability of manual driving and automatic driving are calculated for a combination of the surrounding environment information and the driver state information. In this example, the evaluation value of the suitability of manual driving is calculated to be 0.2, and the evaluation value of the suitability of automatic driving is calculated to be 0.8.
  • the obtained evaluation value is output to the driving mode candidate selection unit 7.
  • in step S8, the driving mode candidate selection unit 7 selects a driving mode candidate appropriate for the current surrounding environment and driver state based on the evaluation values obtained by the driving mode evaluation unit 15.
  • a threshold value is set for the difference between the evaluation values of automatic driving and manual driving, and when the difference exceeds the threshold value, the driving mode with the larger evaluation value is taken as the appropriate driving mode candidate. For example, when the threshold value is set to 0.5, the difference between the evaluation value of automatic driving and that of manual driving in FIG. 9 is 0.6, which exceeds the threshold value, so automatic driving is selected as the appropriate driving mode candidate. If the threshold value is not exceeded, the currently applied driving mode is selected as the candidate.
  • here, the method of setting a threshold value is used, but more simply, the driving mode with the larger evaluation value may be selected as the appropriate driving mode candidate.
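The candidate-selection rule of step S8 can be sketched as follows (the threshold and mode names are illustrative):

```python
# Candidate selection with a switching threshold on the evaluation-value
# difference, as in the FIG. 9 example (threshold 0.5).
def select_candidate(evals, current_mode, threshold=0.5):
    """Return the driving mode candidate given evaluation values per mode."""
    best = max(evals, key=evals.get)
    worst = min(evals, key=evals.get)
    if evals[best] - evals[worst] > threshold:
        return best                   # large enough margin: propose switching
    return current_mode               # otherwise keep the current mode

assert select_candidate({"automatic": 0.8, "manual": 0.2}, "manual") == "automatic"
assert select_candidate({"automatic": 0.6, "manual": 0.4}, "manual") == "manual"
```

The second assertion shows the fallback behavior: a 0.2 margin is below the threshold, so the currently applied mode remains the candidate.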
  • in step S9, the driving mode candidate selection unit 7 compares the driving mode candidate selected in step S8 with the current driving mode. If they match, the process proceeds to step S10, in which the driving mode presentation unit 8 presents that the current driving mode is appropriate, and a signal instructing continued application of the current driving mode is output to the driving control unit 10. Thereafter, the process proceeds to step S11, and the driving control unit 10, having received this signal, performs driving control by continuing to apply the current driving mode.
  • if the driving mode candidate does not match the current driving mode in step S9, the process proceeds to step S12, in which the driving mode candidate selection unit 7 causes the driving mode presentation unit 8 to present content urging the driver to switch to the selected driving mode candidate. When the selected driving mode candidate is presented by the driving mode presentation unit 8, the driver decides whether to switch to the presented driving mode candidate.
  • in step S13, the driving control unit 10 waits to see whether the driver issues an instruction to switch the driving mode via the driving mode input unit 9, and if there is a switching instruction, the process proceeds to step S14.
  • in step S14, the driving control unit 10 performs driving control by switching to the driving mode candidate selected in step S8. If the driver does not give an instruction to switch the driving mode in step S13, for example when an instruction to continue the current driving mode is input, or when nothing has been input for a certain period of time after the content urging the switch was presented by the driving mode presentation unit 8, the process proceeds to step S11, where the driving control unit 10 performs driving control by continuing to apply the current driving mode.
  • the driving support device 1, the driving control device 3, the surrounding environment detecting unit 5, and the driver state detecting unit 6 repeat the operations shown in steps S1 to S14 to perform driving support.
  • the vehicle communication unit 11 communicates with the server communication unit 12 at an arbitrary timing before step S7 and acquires the driving mode determination model from the information server 4.
  • in step S101, the surrounding environment description unit 23 generates, from the surrounding environment detection data stored in the learning data storage unit 22, character string information in which the surrounding environment is described by a character string in a natural language.
  • the generated character string information is output to the morpheme extraction unit 24.
  • in step S102, the morpheme extraction unit 24 performs morphological analysis on the character string information generated by the surrounding environment description unit 23 in step S101 and extracts morphemes.
  • the extracted morpheme is output to the numerical vectorization unit 25.
  • in step S103, the morphemes obtained by the morpheme extraction unit 24 in step S102 are converted into a numerical vector by the numerical vectorization unit 25.
  • the converted numerical vector is output to the operation mode determination model generation unit 27.
  • in step S104, the driver state feature amount extraction unit 26 extracts the biological signal feature amounts of the driver state from the driver state detection data stored in the learning data storage unit 22.
  • the extracted driver state feature amount is output to the driving mode determination model generation unit 27.
  • in step S105, the driving mode determination model generation unit 27 generates a driving mode determination model by a statistical method based on the numerical vector input from the numerical vectorization unit 25, the driver state feature amounts input from the driver state feature amount extraction unit 26, and the driving mode data input from the learning data storage unit 22. For example, the model can be generated using the maximum entropy method. Specifically, as shown in equations (1) and (2), a parameter W_i that characterizes the functional form of the probability distribution function of the realization probability P is introduced, the form of the probability distribution function realized by each driving mode is assumed, and the parameter W_i is optimized. The influence vector given by the optimized W_i is the driving mode determination model, and the realization probability P at that time is the evaluation value for each driving mode.
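The parameter optimization described above can be sketched, under the assumption of a two-mode softmax (maximum entropy) model, as gradient-ascent logistic regression; the tiny data set, learning rate, and epoch count below are purely illustrative:

```python
import math

# Sketch of optimizing the influence-vector parameters by maximizing the
# log-likelihood of a two-mode softmax model over labeled learning data.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, dim, lr=0.5, epochs=200):
    """data: list of (feature_vector, label) with label 1=automatic, 0=manual."""
    w = [0.0] * dim                              # effectively W_auto - W_manual
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            for j in range(dim):                 # gradient ascent on log-likelihood
                w[j] += lr * (y - p) * x[j]
    return w

# features = [pedestrian present, driver alert]; crowded scenes labeled automatic
data = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
w = train(data, dim=2)
p_auto = sigmoid(w[0])   # realization probability of automatic when a pedestrian is present
```

The learned weights play the role of the influence vector, and evaluating the trained model on a new feature vector yields the realization probability used as the evaluation value.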
  • the probability distribution function of the realization probability of each of the automatic driving and the manual driving with respect to each component of the numerical vector that is the surrounding environment information and each feature amount of the driver state information is obtained, and the joint probability is used as an evaluation value for each driving mode.
  • the generated driving mode determination model is stored in the determination model storage unit 21. Steps S101 to S103 described above are the same operations as steps S2 to S4 in the flowchart of FIG. 4 describing the driving support operation of the driving support device 1 in FIG. 1, and similarly, the operation in step S104 is the same as that in step S6.
  • when the server communication unit 12 receives a request signal for the driving mode determination model from the vehicle communication unit 11, the server communication unit 12 outputs the driving mode determination model stored in the determination model storage unit 21 to the vehicle communication unit 11.
  • in this way, the driving mode determination model is provided to the driving support device 1, and the driving support device 1 stores the provided driving mode determination model in the determination model storage unit 16, so that the model can be used to evaluate the driving mode in driving support.
  • the driving mode determination model generation device 2 appropriately constructs the driving mode determination model from a large amount of learning data by a statistical method, so that the evaluation values of automatic driving and manual driving, which are useful for selecting a driving mode, can each be calculated more appropriately.
  • the surrounding environment information is converted into a character string in a natural language by the surrounding environment description unit 17, and then the morpheme in the character string is extracted by the morpheme extracting unit 18.
  • the objects that appear in the surrounding environment of a vehicle are diverse, and their combinations and positional relationships change frequently. Therefore, when creating a model representing the surrounding environment, setting each surrounding environment individually is extremely laborious.
  • in contrast, with the configuration described above, the surrounding environment can be expressed by a combination of morphemes defined in a natural language database without setting each surrounding environment individually, which simplifies the work of model creation.
  • in addition, surrounding environments that the person designing the model did not think to set can still be expressed by combining morphemes.
  • an existing natural language database may be used, or a natural language database may be newly created for an operation mode determination model.
  • the advantage of describing the surrounding environment with a character string in a natural language is not only that a natural language database can be used.
  • by performing logical inference on a character string described in a natural language using a knowledge structure database, it is possible to add information not only about the surrounding environment acquired by the sensors but also about situations outside the sensor range. For example, consider a case where a soccer ball appears in an image acquired by the camera. When a soccer ball is shown, a child may be playing nearby outside the image, and if the child suddenly runs out into the road to retrieve the ball, there is a possibility of collision with the vehicle.
  • logical inference using the knowledge structure database thus makes it possible to infer, from the image of the soccer ball, the possibility that a child will run out, and by presenting this to the driver, the driver can drive so as to avoid a collision in advance.
  • inference and information presentation are performed in the following procedure.
  • first, the character string "soccer ball" is described. If the character string "soccer ball" is associated with "child" in the knowledge structure database, information based on "child" is presented: there may be a "child" nearby who is invisible to the driver, so the driver should carefully check that no child runs out. If it is further recognized from detailed information, for example an image or a moving image, that "a soccer ball is rolling down the road", the character string "soccer ball is rolling down the road" is described. Then, as described above, it can be inferred from the "rolling" of the "soccer ball" that a "child" may "jump out", and more detailed information can be provided to the driver.
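For illustration, the knowledge-structure lookup described above might be sketched as a simple association table that expands observed morphemes with inferred ones (the table contents are hypothetical):

```python
# Toy knowledge-structure lookup: morphemes found in the scene description are
# expanded with associated concepts, so out-of-view possibilities (a child
# chasing the ball) can be inferred. The association table is hypothetical.
KNOWLEDGE = {
    "soccer ball": ["child"],
    ("soccer ball", "rolling"): ["child", "jump out"],
}

def infer(morphemes):
    """Add inferred morphemes based on single terms and term combinations."""
    inferred = set(morphemes)
    for key, assoc in KNOWLEDGE.items():
        terms = key if isinstance(key, tuple) else (key,)
        if all(t in morphemes for t in terms):
            inferred.update(assoc)
    return sorted(inferred)

print(infer(["soccer ball", "rolling", "road"]))
```

The richer combination ("soccer ball" plus "rolling") triggers the stronger inference ("jump out"), matching the two-stage reasoning described above.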
  • at this time, the value of the component of the numerical vector corresponding to the specific morpheme becomes 1 and the evaluation values change, so that a driving mode more suitable for the surrounding environment can be evaluated. For example, when it is recognized from an image or video that a soccer ball is rolling down the road and this is described as a character string, it can be inferred that a child may jump out because the soccer ball is rolling, and the possibility of this is high.
  • the morphemes of "child” and “jump” are added by inference, and the components of the numerical vector corresponding to "child” and "jump"
  • the value of is 1, for example, while the evaluation value of manual driving decreases, the evaluation value of automatic driving increases, and even if the driver does not notice sudden child jumping out or the reaction of the driver is likely to be delayed, By automatically driving, the possibility of collision between the vehicle and the child can be avoided.
  • in the above description, the driving support device 1 is configured to include the determination model storage unit 16, but the driving support device 1 may instead be configured without the determination model storage unit 16, with the driving mode evaluation unit 15 referring as needed to the determination model storage unit 21 in the information server 4 through communication via the vehicle communication unit 11 and the server communication unit 12.
  • further, the driving support device in the present embodiment uses the driving mode determination model to calculate the evaluation values indicating the suitability of automatic driving and manual driving for the combination of the surrounding environment information and the driver state information; however, it suffices that the evaluation values are calculated based on the combination of the surrounding environment information and the driver state information, and a configuration that does not use the driving mode determination model may be adopted. For example, rules may be used such that, in the surrounding environment information, each additional pedestrian increases the evaluation value of automatic driving by 1 and decreases the evaluation value of manual driving by 1, or a threshold value is set for the heart rate in the driver state information and, if the threshold value is exceeded, the evaluation value of automatic driving is increased by 10.
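The model-free alternative described above can be sketched directly as rules (the baseline values and increments below are illustrative):

```python
# Sketch of the rule-based alternative: evaluation values are adjusted
# directly from the raw information, without a learned judgment model.
def rule_based_eval(n_pedestrians, heart_rate, hr_threshold=100):
    auto, manual = 50, 50                 # neutral baseline (illustrative)
    auto += n_pedestrians                 # +1 automatic per pedestrian
    manual -= n_pedestrians               # -1 manual per pedestrian
    if heart_rate > hr_threshold:         # stressed driver favors automatic
        auto += 10
    return {"automatic": auto, "manual": manual}

ev = rule_based_eval(n_pedestrians=3, heart_rate=110)
# ev == {"automatic": 63, "manual": 47}
```

The trade-off is transparency versus accuracy: such hand-written rules are easy to inspect but cannot capture the feature interactions a learned determination model encodes.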
  • in the second embodiment, the information server 4 includes a learning data adding unit 42, and the accuracy of the determination model is improved by changing whether the learning data adding unit 42 adds learning data according to the driver's judgment input to the driving mode input unit 9.
  • FIG. 11 is a configuration diagram showing the configuration of the driving support device 1 and the driving mode determination model generation device 2 according to the second embodiment for carrying out the present invention; the configuration is similar to that of the first embodiment.
  • in the present embodiment, when the driver does not switch to the presented driving mode, the driving mode input unit 9 outputs a signal indicating that the driving mode is not switched to the learning data adding unit 42 via the vehicle communication unit 11 and the server communication unit 12.
  • "the driver does not switch to the presented driving mode" covers not only the case where the driver inputs to the driving mode input unit 9 an instruction to continue running in the current driving mode, but also the case where there is no input from the driver to the driving mode input unit 9 for a certain period of time after the content urging the switch of the driving mode was presented by the driving mode presentation unit 8.
  • upon receiving this signal, the learning data adding unit 42 collects learning data from the surrounding environment information output unit 13, the driver state information output unit 14, and the driving control unit 10, and adds the learning data to the learning data storage unit 22.
  • the learning data to be added is, for example, a data set of the surrounding environment detection data, the driver state detection data, and driving mode data. The driving mode data here is not the driving mode candidate presented by the driving mode presentation unit 8 but driving mode information recording the currently applied driving mode. The reason is that when the driver has not switched the driving mode, the driver has judged that the driving mode candidate presented by the driving mode presentation unit 8 is not appropriate for the current surrounding environment and driver state. Accordingly, the driving mode data to be added in correspondence with the surrounding environment detection data and the driver state detection data used to select the presented candidate should record that the presented candidate is inappropriate and that the currently applied driving mode is appropriate.
  • the learning data adding unit 42 acquires the surrounding environment detection data from the surrounding environment description unit 17 provided in the surrounding environment information output unit 13, and the driver state detection data from the driver state feature amount extraction unit 20 provided in the driver state information output unit 14.
  • the surrounding environment description unit 17 includes a memory for temporarily storing the surrounding environment detection data input from the surrounding environment detection unit 5. Each input of surrounding environment detection data is given an identifier indicated by "No." in FIG. 2, and when the driver does not switch to the presented driving mode, the learning data adding unit 42 can read the corresponding surrounding environment detection data from the surrounding environment description unit 17 by designating it with this identifier.
  • similarly, the driver state feature amount extraction unit 20 is provided with a memory for temporarily storing the driver state detection data input from the driver state detection unit 6, and each input of driver state detection data is given the identifier indicated by "No." in the figure. The learning data adding unit 42 can read the corresponding driver state detection data from the driver state feature amount extraction unit 20 by designating it with this identifier.
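The identifier-keyed temporary memory could be sketched as follows (the capacity handling and data shapes are assumptions, not part of the disclosure):

```python
# Minimal sketch of the identifier-keyed temporary memory: detection data is
# buffered under a running "No." identifier so the learning data adding unit
# can later fetch exactly the data that produced a rejected candidate.
class DetectionBuffer:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.store = {}
        self.next_id = 0

    def put(self, data):
        """Store `data` and return its "No." identifier."""
        ident = self.next_id
        self.store[ident] = data
        self.next_id += 1
        if len(self.store) > self.capacity:   # drop the oldest entry
            self.store.pop(min(self.store))
        return ident

    def get(self, ident):
        return self.store.get(ident)

buf = DetectionBuffer()
no = buf.put({"image": "front_camera_frame_0"})
assert buf.get(no) == {"image": "front_camera_frame_0"}
```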
  • the learning data addition unit 42 acquires from the operation control unit 10 operation mode data as operation mode information indicating the current operation mode.
  • here, the current driving mode is the driving mode already applied in the driving control unit 10 at the time of the driver's switching input.
  • in the above, the learning data adding unit 42 acquires the surrounding environment detection data from the surrounding environment description unit 17, the driver state detection data from the driver state feature amount extraction unit 20, and the driving mode data from the driving control unit 10; however, the configuration for data acquisition is not limited to this as long as each type of data can be acquired. For example, the surrounding environment detection data may be acquired from the surrounding environment detection unit 5, the driver state detection data from the driver state detection unit 6, and the driving mode data from the driving mode candidate selection unit 7.
  • the configurations of the driving control device 3 and the information server 4 other than those described above in the present embodiment, and the configurations of the surrounding environment detection unit 5 and the driver state detection unit 6 are the same as those in the first embodiment.
  • the driving support device 1 and the driving mode determination model generation device 2 in the embodiment are configured as described above.
  • the hardware configuration in the present embodiment is the same as that shown in FIG. 3, and the function of the learning data adding unit 42 added in the second embodiment is realized by the processing device 29 executing a program stored in the storage device 31. This function is not limited to a combination of hardware and software; the above-described program may be implemented in the processing device 29 and realized by hardware alone.
  • in step S201, the learning data adding unit 42 collects learning data via the vehicle communication unit 11 and the server communication unit 12.
  • the learning data in the present embodiment is a data set of surrounding environment detection data, driver state detection data, and operation mode data representing the current operation mode.
  • the surrounding environment detection data is output from the surrounding environment description unit 17, the driver state detection data is output from the driver state feature amount extraction unit 20, and the operation mode data is output from the operation control unit 10.
  • in step S202, the learning data adding unit 42 outputs the learning data collected in step S201 to the learning data storage unit 22, and the learning data storage unit 22 stores the input learning data.
  • after step S202, the process proceeds to step S11.
  • the operation in step S11 is the same as the operation in the first embodiment.
  • as described above, by adding, as learning data, the data set obtained when a driving mode candidate that did not match the driver's judgment was presented, together with the driving mode determined by the driver, the accuracy of the driving mode determination model generated next time can be improved, and driving support more suited to the driver can be performed.
  • in the above, the surrounding environment detection data and the driver state detection data are assumed as the learning data to be added; however, as long as the data set contains the surrounding environment information and the driver state information at the timing when the driver refused to switch the driving mode, it is not necessary to use these detection data.
  • for example, a numerical vector may be stored as the surrounding environment information, and driver state feature amount information may be stored as the driver state information, to form the data set.
  • the driving support device and the driving mode determination model generation device are applicable to an automatic driving system for a vehicle.


Abstract

Provided is a driving assistance device capable of assisting in the appropriate assessment of a driving mode taking into consideration both the driver status and the vehicle surrounding environment. Also provided is a driving mode assessment model generation device for generating a driving mode assessment model suitable for use in the driving assistance device. A driving assistance device 1 is equipped with: a surrounding environmental information output unit 13 for obtaining vehicle surrounding environmental information; a driver status information output unit 14 for obtaining driver status information corresponding to the surrounding environmental information; and a driving mode evaluation unit 15 for calculating an evaluation value representing the suitability of automatic driving and manual driving for a combination of the surrounding environmental information and the driver status information, the calculation carried out on the basis of such combination.

Description

Driving support device and driving mode determination model generation device
The present invention relates to a driving support device that supports setting of a driving mode suited to the situation in a vehicle having an automatic driving function, and to a device that generates a driving mode determination model used by the driving support device.
When a vehicle having an automatic driving function is driven, it is necessary to appropriately switch between automatic driving and manual driving in various traffic conditions, that is, to select an appropriate driving mode.
As a technique for selecting an appropriate driving mode, for example, Patent Document 1 discloses a method of determining, in a vehicle under automatic driving, whether the driver can return to manual driving based on the result of detecting the driver's actions and the driver's state. Similarly, Patent Document 1 discloses a method of determining whether automatic driving can be continued in a vehicle under automatic driving, based on the vehicle's travel position information, map data, and vehicle travel information.
JP 2018-5343 A
In Patent Document 1, whether a return to manual driving is possible is determined based only on the driver's state; however, even when the driver's state is the same, the determination may differ if the surrounding environment of the vehicle differs. For example, if the driver is somewhat tired, manual driving is possible on a straight road with few pedestrians or other vehicles, but in a situation with complicated lane branches and many pedestrians and other vehicles, manual driving is difficult unless the driver is able to pay close attention. Similarly, the determination of whether automatic driving is possible may depend not only on the surrounding environment of the vehicle but also on the driver's state.
As described above, it is desirable to determine whether automatic driving or manual driving is appropriate, that is, to determine the driving mode, based on the combination of the driver's state and the vehicle's surrounding environment; conventionally, however, the determination has been based only on the driver's state for manual driving, and only on the vehicle's surrounding environment for automatic driving.
The present invention has been made to solve the above problems. A first object is to provide a driving assistance device capable of assisting an appropriate driving mode decision based on both the driver's state and the vehicle's surrounding environment.
A second object is to provide a driving mode judgment model generation device that generates a driving mode judgment model suitable for use in this driving assistance device.
A driving assistance device according to the present invention includes: a surrounding environment information output unit that outputs surrounding environment information of a vehicle; a driver state information output unit that outputs driver state information corresponding to the surrounding environment information; and a driving mode evaluation unit that, based on the combination of the surrounding environment information and the driver state information, calculates evaluation values representing the suitability of automatic driving and manual driving for that combination.
Since the device of the present invention calculates, from the combination of the vehicle's surrounding environment information and the driver state information, evaluation values representing the suitability of automatic driving and manual driving for that combination, these evaluation values can be used to assist in determining a driving mode appropriate to the surrounding environment and the driver's state.
FIG. 1 is a configuration diagram showing the configurations of the driving assistance device and the driving mode judgment model generation device according to Embodiment 1.
FIG. 2 is a schematic diagram illustrating the learning data stored in the learning data storage unit provided in the driving mode judgment model generation device.
FIG. 3 is a configuration diagram showing the hardware configuration of the driving control device, the information server, the surrounding environment detection unit, and the driver state detection unit according to Embodiment 1.
FIG. 4 is a flowchart showing the operation of the driving control device including the driving assistance device according to Embodiment 1.
FIG. 5 is a schematic diagram illustrating the surrounding environment of a vehicle in an example according to Embodiment 1.
FIG. 6 is a schematic diagram illustrating a numerical vector generated by the numerical vectorization unit provided in the driving assistance device according to Embodiment 1.
FIG. 7 is a schematic diagram illustrating driver feature quantities extracted by the driver state feature extraction unit provided in the driving assistance device according to Embodiment 1.
FIG. 8 is a schematic diagram illustrating the driving mode judgment model stored in the judgment model storage unit provided in the driving assistance device according to Embodiment 1.
FIG. 9 is a schematic diagram illustrating evaluation values calculated by the driving mode evaluation unit provided in the driving assistance device according to Embodiment 1.
FIG. 10 is a flowchart showing the operation of the information server including the driving mode judgment model generation device according to Embodiment 1.
FIG. 11 is a configuration diagram showing the configurations of the driving assistance device and the driving mode judgment model generation device according to Embodiment 2.
FIG. 12 is a flowchart showing the operation of the driving control device including the driving assistance device according to Embodiment 2 and the information server including the driving mode judgment model generation device.
Embodiment 1.
FIG. 1 is a configuration diagram showing the configurations of a driving assistance device 1 and a driving mode judgment model generation device 2 according to Embodiment 1 for carrying out the present invention. The driving assistance device 1 is provided as a part of a driving control device 3 that controls the driving of a vehicle, and the driving mode judgment model generation device 2 is provided as a part of an information server 4 that provides various information services to vehicles as appropriate. In Embodiment 1, the driving control device 3 is mounted on the vehicle, and the information server 4 is installed at an arbitrary location and connected to the driving control device 3 mounted on the vehicle via a communication line.
The driving control device 3 includes a driving control unit 10 that performs overall driving control of the vehicle. Based on input information, the driving control unit 10 controls drive-system equipment such as acceleration/deceleration and steering, including switching between automatic and manual driving of the vehicle.
The driving control device 3 also includes, as components related to assisting the driving mode decision according to the present invention, the driving assistance device 1, a driving mode candidate selection unit 7, a driving mode presentation unit 8, a driving mode input unit 9, and a vehicle communication unit 11.
Further, a surrounding environment detection unit 5 and a driver state detection unit 6 are connected to the driving assistance device 1 via communication lines.
The surrounding environment detection unit 5 is mounted on the vehicle or installed in the urban area or on the road where the vehicle travels; it detects the surrounding environment, such as the presence and positions of people, vehicles, and structures around the vehicle, and reports it to the driving assistance device 1. Specifically, the surrounding environment detection unit 5 detects the vehicle's surrounding environment and outputs surrounding environment detection data representing the detected environment to the surrounding environment information output unit 13; a plurality of such units may be provided. Examples of the surrounding environment detected by the surrounding environment detection unit 5 include pedestrians, other vehicles, structures on the road, and obstacles. The surrounding environment detection unit 5 comprises various sensors for detecting these, such as a camera, Lidar (Light Detection and Ranging), or Radar (Radio Detection and Ranging), and a communication device that outputs the information detected by the sensors to the surrounding environment information output unit 13 as surrounding environment detection data.
The surrounding environment detection unit 5 need only be able to detect the vehicle's surrounding environment; it may be mounted on the vehicle or installed in a facility such as a traffic signal on the road. For example, if the sensor of the surrounding environment detection unit 5 is a camera mounted on the vehicle, image information capturing the surroundings from that vehicle can be obtained; if the sensor is a Lidar or Radar installed on a traffic signal, detection information on pedestrians, other traveling vehicles, obstacles, and the like around the signal the vehicle is approaching can be obtained, including, in particular, detection information on targets that a camera installed on the vehicle cannot capture because they are hidden behind other objects.
The driver state detection unit 6 is mounted on the vehicle; it detects the driver's state and reports it to the driving assistance device 1. Specifically, the driver state detection unit 6 detects the driver's state and outputs driver state detection data representing the detected state to the driver state information output unit 14; a plurality of types may be provided. Examples of the driver state acquired by the driver state detection unit 6 include electrocardiogram, electromyogram, eye movement, brain waves, respiration, pressure, and perspiration. The driver state detection unit 6 comprises biometric sensors and cameras for detecting these, and a communication device that outputs the detected information to the driver state information output unit 14 as driver state detection data.
The driving assistance device 1 is a device that, when a vehicle equipped with an automatic driving function is driven, assists in switching appropriately between automatic and manual driving, that is, in selecting an appropriate driving mode, based on the driver's state reported by the driver state detection unit 6 and the vehicle's surrounding environment reported by the surrounding environment detection unit 5. Specifically, it calculates and presents evaluation values representing the suitability of automatic driving and manual driving for the current situation. Since the evaluation values are calculated from both the driver's state and the vehicle's surrounding environment, they serve as useful decision material when the driver, or a system controlling the driving mode, determines the driving mode. The detailed configuration of the driving assistance device 1 is described later.
The driving mode candidate selection unit 7 selects an appropriate driving mode candidate based on the driving mode evaluation values calculated by the driving assistance device 1. It compares the selected driving mode candidate with the current driving mode applied by the driving control unit 10; if they match, it causes the driving mode presentation unit 8 to present, as driving assistance information, a message that the current driving mode is appropriate, and if they do not match, it causes the driving mode presentation unit 8 to present, as driving assistance information, a message prompting a switch to the selected driving mode candidate. The driving mode presentation unit 8 comprises, for example, a speaker for audio output and a display for on-screen presentation.
In addition, when an immediate driving mode switch is required, such as when the vehicle is rapidly approaching a road facility, the driving mode candidate selection unit 7 outputs a signal recommending a forced switch to the driving control unit 10, thereby enabling the driving control unit 10 to switch the driving mode forcibly.
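The comparison logic of the driving mode candidate selection unit 7 can be sketched as follows. This is an illustrative outline only, not the patented implementation; the function names, message texts, and score values are assumptions.

```python
def select_mode_candidate(eval_values: dict) -> str:
    """Pick the driving mode with the highest suitability evaluation value."""
    return max(eval_values, key=eval_values.get)

def assistance_message(eval_values: dict, current_mode: str) -> str:
    """Compare the candidate with the current mode and build the message
    presented by the driving mode presentation unit."""
    candidate = select_mode_candidate(eval_values)
    if candidate == current_mode:
        return f"Current driving mode '{current_mode}' is appropriate."
    return f"Switching to '{candidate}' is recommended."

# Example: automatic driving scores higher than manual driving
scores = {"automatic": 0.82, "manual": 0.35}
print(assistance_message(scores, "manual"))
# -> Switching to 'automatic' is recommended.
```

In this sketch, the forced-switch path would simply bypass the message and signal the driving control unit directly when the candidate differs from the current mode under an urgency condition.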
The driving mode input unit 9 allows the driver to input whether or not to switch the driving mode, in view of the driving mode candidate presented by the driving mode presentation unit 8. When the driver inputs that the driving mode is to be switched, a signal indicating a switch to the driving mode candidate is output to the driving control unit 10.
When the signal output from the driving mode input unit 9 indicates that the driving mode is to be switched, the driving control unit 10 switches the driving mode and performs driving control according to the switched mode. The driving control unit 10 may also switch the driving mode forcibly based on the signal recommending a forced switch from the driving mode candidate selection unit 7.
The vehicle communication unit 11 communicates with the server communication unit 12 provided in the information server 4 and exchanges information between the driving control device 3 and the information server 4. This embodiment shows, in particular, a configuration in which the driving mode judgment model generated by the driving mode judgment model generation device 2 is acquired and output to the judgment model storage unit 16.
The driving control device 3 is configured as described above. The configuration of the driving assistance device 1 of the present invention included in the driving control device 3 is described next.
As shown in FIG. 1, the driving assistance device 1 includes a surrounding environment information output unit 13, a driver state information output unit 14, a driving mode evaluation unit 15, and a judgment model storage unit 16.
The surrounding environment information output unit 13 generates, based on the vehicle's surrounding environment detection data obtained from the surrounding environment detection unit 5, surrounding environment information in a format suitable for the driving mode decision, and outputs it to the driving mode evaluation unit 15. More specifically, the surrounding environment information output unit 13 includes a surrounding environment description unit 17, a morpheme extraction unit 18, and a numerical vectorization unit 19.
The surrounding environment description unit 17 uses the surrounding environment detection data obtained by the surrounding environment detection unit 5 to recognize pedestrians, other vehicles, obstacles, and the like, and generates character string information describing the surrounding environment as a natural-language character string. The generated character string information is output to the morpheme extraction unit 18.
The morpheme extraction unit 18 performs morphological analysis on the character string information generated by the surrounding environment description unit 17, extracts morphemes, and outputs the extracted morphemes to the numerical vectorization unit 19. Here, morphological analysis is a language processing technique that divides text into morphemes, the smallest units of meaning in a language. For example, the character string 「前方に歩行者が3人いる」 ("there are three pedestrians ahead") is divided into 「前方」「に」「歩行者」「が」「3」「人」「いる」.
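As a minimal sketch of this step, the following greedy longest-match tokenizer splits the example sentence using a tiny hand-made dictionary. A real morpheme extraction unit would use a full morphological analyzer such as MeCab; the dictionary and function name here are assumptions for illustration only.

```python
# Tiny illustrative dictionary; a real analyzer carries a full lexicon.
DICTIONARY = {"前方", "に", "歩行者", "が", "3", "人", "いる"}

def tokenize(text: str) -> list:
    """Greedy longest-match segmentation into dictionary morphemes."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest candidate first
            if text[i:j] in DICTIONARY:
                tokens.append(text[i:j])
                i = j
                break
        else:
            i += 1  # skip characters not covered by the dictionary
    return tokens

print(tokenize("前方に歩行者が3人いる"))
# -> ['前方', 'に', '歩行者', 'が', '3', '人', 'いる']
```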
The numerical vectorization unit 19 converts the morphemes extracted by the morpheme extraction unit 18 into a numerical vector. In this embodiment, the One-Hot model is used as the vectorization method. The One-Hot model is a model in which the number of words in the corpus is taken as the number of vector dimensions, and each dimension is set to 1 if the corresponding word appears and 0 if it does not. As another vectorization method, Word2Vector or the like can be used.
In this embodiment, the numerical vectorization unit 19 converts morphemes into a numerical vector, but the purpose is to convert the morphemes into numerical form. Therefore, the numerical conversion method is not limited to vectorization; for example, a method of converting into a scalar, or into a higher-order tensor such as a matrix, may be used.
The vectorized morphemes obtained by the numerical vectorization unit 19 are output to the driving mode evaluation unit 15 as the surrounding environment information.
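The One-Hot conversion described above can be sketched as follows; the vocabulary contents are an assumption for illustration, not taken from the patent.

```python
def one_hot_vector(morphemes, vocabulary):
    """One-Hot style vector: one dimension per corpus word,
    1 if the word appears among the extracted morphemes, else 0."""
    present = set(morphemes)
    return [1 if word in present else 0 for word in vocabulary]

# Hypothetical mini-corpus vocabulary (its order fixes the dimensions)
vocab = ["前方", "歩行者", "車両", "3", "人", "いる", "交差点"]
print(one_hot_vector(["前方", "に", "歩行者", "が", "3", "人", "いる"], vocab))
# -> [1, 1, 0, 1, 1, 1, 0]
```

Morphemes outside the vocabulary (here 「に」 and 「が」) simply do not set any dimension, which is consistent with the corpus-defined dimensionality of the One-Hot model.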
The driver state information output unit 14 generates, based on the driver state detection data obtained from the driver state detection unit 6, driver state information in a format suitable for the driving mode decision, and outputs it to the driving mode evaluation unit 15. More specifically, the driver state information output unit 14 includes a driver state feature extraction unit 20.
The driver state feature extraction unit 20 uses the driver state detection data obtained by the driver state detection unit 6 to extract the biosignal feature quantities of the driver state used by the driving mode evaluation unit 15. As examples of biosignal feature quantities, the heartbeat interval, QRS width, and QRS wave height are obtained from an electrocardiogram signal obtained as driver state detection data, and the pupil diameter is obtained from image data of the eyeball obtained as driver state detection data.
The driving mode evaluation unit 15 then calculates, based on the surrounding environment information output from the surrounding environment information output unit 13, the driver state information output from the driver state information output unit 14, and the driving mode judgment model stored in the judgment model storage unit 16, evaluation values representing the suitability of manual driving and automatic driving, and outputs these evaluation values to the driving mode candidate selection unit 7. Here, the driving mode judgment model indicates the degree of influence of the vehicle's surrounding environment and the driver's state on each of automatic driving and manual driving; by constructing this model appropriately, it becomes possible to calculate more appropriately the evaluation values for automatic and manual driving that are useful for selecting an appropriate driving mode. The driving mode judgment model is generated in advance using prepared learning data and the like, and is stored in the judgment model storage unit 16.
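One simple way to realize such an evaluation is a weighted sum: each driving mode holds a vector of influence weights over the concatenated surrounding environment vector and biosignal features, and the evaluation value is their inner product. This is only a sketch of the idea; the weight values and dimensions are invented for illustration.

```python
def evaluate_modes(env_vector, driver_features, model):
    """Evaluation value per mode = weighted sum of the combined input,
    where the weights encode each input's influence on that mode."""
    x = list(env_vector) + list(driver_features)
    return {mode: sum(w * v for w, v in zip(weights, x))
            for mode, weights in model.items()}

# Hypothetical model: 3 environment dims + 1 biosignal feature (heartbeat interval)
model = {
    "automatic": [0.1, 0.4, 0.2, 0.6],
    "manual":    [0.5, -0.3, 0.1, -0.4],
}
scores = evaluate_modes([1, 1, 0], [0.9], model)
print(scores)
```

With these weights, a long heartbeat interval (suggesting drowsiness) raises the automatic-driving score and lowers the manual-driving score, matching the influence-degree idea of the judgment model.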
The judgment model storage unit 16 stores the driving mode judgment model that the vehicle communication unit 11 acquires from the information server 4 through communication with the server communication unit 12.
The driving assistance device 1 is configured as described above.
Next, the driving mode judgment model generation device 2 of the present invention is described. As described above, the driving mode judgment model used by the driving assistance device 1 is generated in advance and stored in the judgment model storage unit 16. The device that generates this driving mode judgment model is the driving mode judgment model generation device 2. In this embodiment, the driving mode judgment model generation device 2 is provided in the information server 4, which provides various information services to vehicles, and the generated driving mode judgment model is provided to the driving control device 3 of each vehicle as appropriate.
The information server 4 includes, in addition to the driving mode judgment model generation device 2, a judgment model storage unit 21 and the server communication unit 12. Although the information server 4 also provides vehicles with various traffic information other than the driving mode judgment model, this is omitted in this embodiment.
The judgment model storage unit 21 stores the driving mode judgment model generated by the driving mode judgment model generation device 2; upon receiving a request signal from a vehicle via the server communication unit 12, it provides the stored driving mode judgment model to the requesting vehicle via the server communication unit 12.
The server communication unit 12 performs various communications with the vehicle communication unit 11; in this embodiment, in particular, it outputs the driving mode judgment model acquired from the judgment model storage unit 21 to the vehicle communication unit 11.
Next, the configuration of the driving mode judgment model generation device 2 is described. The driving mode judgment model generation device 2 is a device that generates the driving mode judgment model used when the driving assistance device 1 described above calculates the evaluation values.
As shown in FIG. 1, the driving mode judgment model generation device 2 includes a learning data storage unit 22, a surrounding environment description unit 23, a morpheme extraction unit 24, a numerical vectorization unit 25, a driver state feature extraction unit 26, and a driving mode judgment model generation unit 27.
The learning data storage unit 22 stores learning data for generating the driving mode judgment model; when the driving mode judgment model is generated, it outputs the stored learning data to the surrounding environment description unit 23, the driver state feature extraction unit 26, and the driving mode judgment model generation unit 27.
The learning data is now described. FIG. 2 shows an example of the learning data stored in the learning data storage unit 22. In FIG. 2, one row represents one record; a record consists of a pair of individual surrounding environment detection data and driver state detection data, together with driving mode data serving as driving mode information that indicates the appropriate driving mode for that pair. The learning data is a collection of many such records. Each record is given an identifier denoted "No.".
Such learning data is obtained, for example, by actually measuring the surrounding environment and the driver's state while the vehicle is driven, and specifying the appropriate driving mode at that time. Specifically, detection devices similar to the surrounding environment detection unit 5 and the driver state detection unit 6 described with reference to FIG. 1 are attached to a vehicle, the vehicle is driven in various real environments, and the data detected by the respective devices at the same timing during driving are acquired as surrounding environment detection data and driver state detection data. Furthermore, during or after the drive, an evaluator riding in the vehicle specifies the driving mode suited to each of the various situations, and the surrounding environment detection data and driver state detection data are associated with the driving mode suited to those detection data, yielding learning data as shown in FIG. 2. When generating this learning data, manual driving may of course be associated as the driving mode with detection data recorded during manual driving; however, automatic driving may instead be specified for such data if automatic driving is judged to be the suitable driving mode, and conversely, manual driving may be specified as the suitable driving mode even for data recorded during automatic driving.
The learning data obtained in this way is a large collection of data sets pairing various combinations of surrounding environment and driver state with the appropriate driving mode corresponding to each combination. By analyzing it and constructing a model that can be handled numerically, a driving mode judgment model can be obtained that indicates the degree of influence of the vehicle's surrounding environment and the driver's state on each of automatic and manual driving. For this analysis, the driving mode judgment model generation device 2 converts the surrounding environment detection data and the driver state detection data in the learning data stored in the learning data storage unit 22 into numerical form, and generates the driving mode judgment model using a statistical method.
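The record structure described above can be sketched as a simple data type; the field names and the two sample records are assumptions for illustration, not the actual format of FIG. 2.

```python
from dataclasses import dataclass

@dataclass
class LearningRecord:
    """One row of the learning data: paired detection data and the
    appropriate driving mode specified by the evaluator."""
    no: int                  # record identifier ("No.")
    environment_data: str    # surrounding environment detection data
    driver_state_data: dict  # driver state detection data (biosignals)
    driving_mode: str        # appropriate mode: "automatic" or "manual"

dataset = [
    LearningRecord(1, "前方に歩行者が3人いる", {"rr_interval_s": 0.8}, "automatic"),
    LearningRecord(2, "直線道路で歩行者なし", {"rr_interval_s": 0.7}, "manual"),
]
print(len(dataset), dataset[0].driving_mode)
```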
The surrounding environment description unit 23 uses the surrounding environment detection data input from the learning data storage unit 22 to generate character string information describing the surrounding environment as a natural-language character string, and outputs the generated character string information to the morpheme extraction unit 24.
The morpheme extraction unit 24 performs morphological analysis on the character string information generated by the surrounding environment description unit 23, extracts morphemes, and outputs them to the numerical vectorization unit 25.
The numerical vectorization unit 25 converts the morphemes extracted by the morpheme extraction unit 24 into a numerical vector and outputs it to the driving mode judgment model generation unit 27.
The driver state feature extraction unit 26 uses the driver state detection data input from the learning data storage unit 22 to extract the biosignal feature quantities of the driver state, and outputs them to the driving mode judgment model generation unit 27.
The driving mode judgment model generation unit 27 generates the driving mode judgment model using as input the numerical vector output by the numerical vectorization unit 25 as surrounding environment information, the driver's biosignal feature quantities output by the driver state feature extraction unit 26 as driver state information, and the driving mode data, stored in the learning data storage unit 22, that corresponds to the surrounding environment information and the driver state information. For example, when a person is dozing at the wheel, the vehicle should run in the automatic driving mode, and the heart rate decreases during drowsy driving. Therefore, to judge whether automatic driving is the appropriate driving mode, the heartbeat interval is important as a driver state feature, and the degree of influence of the heartbeat interval corresponding to the label "automatic driving" in the driving mode judgment model becomes large. The driving mode judgment model generation unit 27 performs the same processing on all the data included in the learning data, optimizes the degree of influence of each component of the numerical vector of the surrounding environment information and of each biosignal feature quantity of the driver state information on each driving mode, and generates the driving mode judgment model. The generated driving mode judgment model is output to the judgment model storage unit 21.
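A minimal sketch of such an influence-weight optimization is a simple perceptron-style update over the labeled records: whenever the predicted mode differs from the labeled mode, the weights of the true mode are raised and those of the wrong mode lowered along the input. The patent does not fix a particular statistical method, so the learning rule, feature layout, and toy data below are assumptions.

```python
def train_judgment_model(samples, modes=("automatic", "manual"),
                         epochs=50, lr=0.1):
    """Learn per-mode influence weights so the labeled mode scores
    highest for each (environment + driver-state + bias) input."""
    dim = len(samples[0][0])
    weights = {m: [0.0] * dim for m in modes}

    def score(m, x):
        return sum(w * v for w, v in zip(weights[m], x))

    for _ in range(epochs):
        for x, label in samples:
            predicted = max(modes, key=lambda m: score(m, x))
            if predicted != label:  # raise the true mode, lower the wrong one
                for i, v in enumerate(x):
                    weights[label][i] += lr * v
                    weights[predicted][i] -= lr * v
    return weights

def predict(weights, x):
    """Mode with the highest evaluation value under the learned weights."""
    return max(weights, key=lambda m: sum(w * v for w, v in zip(weights[m], x)))

# Toy inputs: [pedestrians_present, complex_lanes, long_rr_interval, bias]
samples = [
    ([1, 1, 0, 1], "automatic"),  # busy environment -> automatic
    ([0, 0, 1, 1], "automatic"),  # drowsy driver -> automatic
    ([0, 0, 0, 1], "manual"),     # quiet road, alert driver -> manual
]
model = train_judgment_model(samples)
```

After training, `predict(model, x)` reproduces the labels of the toy set, illustrating how per-feature influence degrees on each driving mode can be fitted from the learning data.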
Next, the hardware configuration of the present embodiment will be described.
FIG. 3 is a configuration diagram showing the hardware configuration of the driving control device 3, the information server 4, and related components shown in FIG. 1. Processing devices 28 and 29 such as CPUs (Central Processing Units), storage devices 30 and 31 such as ROM (Read Only Memory) and hard disk drives, input devices 32, 33, and 34 such as sensors, an output device 35 such as a speaker or a display, and communication devices 36 and 37 are connected by a bus. Each CPU may also include its own memory.
The surrounding environment detection unit 5 shown in FIG. 1 is realized by the input device 32, the driver state detection unit 6 by the input device 33, and the driving mode input unit 9 by the input device 34; the driving mode presentation unit 8 is realized by the output device 35.
The data held by the determination model storage unit 16 is stored in the storage device 30. The surrounding environment description unit 17, the morpheme extraction unit 18, the numerical vectorization unit 19, the driver state feature extraction unit 20, the driving mode evaluation unit 15, the driving mode candidate selection unit 7, and the driving control unit 10 are realized by the processing device 28 executing programs stored in the storage device 30.
The processing device 28 realizes the functions of the surrounding environment description unit 17, the morpheme extraction unit 18, the numerical vectorization unit 19, the driver state feature extraction unit 20, the driving mode evaluation unit 15, the driving mode candidate selection unit 7, and the driving control unit 10 by reading and executing the programs stored in the storage device 30 as appropriate. These functions are not limited to a combination of hardware and software; the programs may instead be implemented in the processing device 28 and realized by hardware alone, such as a system LSI (Large Scale Integrated Circuit).
The data held by the determination model storage unit 21 and the learning data storage unit 22 is stored in the storage device 31. The surrounding environment description unit 23, the morpheme extraction unit 24, the numerical vectorization unit 25, the driver state feature extraction unit 26, and the driving mode determination model generation unit 27 are realized by the processing device 29 executing programs stored in the storage device 31.
The processing device 29 realizes the functions of the surrounding environment description unit 23, the morpheme extraction unit 24, the numerical vectorization unit 25, the driver state feature extraction unit 26, and the driving mode determination model generation unit 27 by reading and executing the programs stored in the storage device 31 as appropriate. As with the processing device 28, these functions are not limited to a combination of hardware and software; the programs may instead be implemented in the processing device 29 and realized by hardware alone.
The vehicle communication unit 11 is realized by the communication device 36, and the server communication unit 12 is realized by the communication device 37.
Next, the operation of the present embodiment will be described.
The operation of the driving control device 3, the surrounding environment detection unit 5, and the driver state detection unit 6 in the present embodiment will be described with reference to the flowchart of FIG. 4.
First, in step S1, the surrounding environment detection unit 5 detects the vehicle's surrounding environment with various sensors. Examples of the surrounding environment to be detected include pedestrians, other vehicles, structures on the road, and obstacles; an example of such a sensor is a camera mounted on the vehicle that images the area ahead. The surrounding environment detection unit 5 outputs surrounding environment detection data representing the detected surrounding environment to the surrounding environment information output unit 13.
In step S2, when the surrounding environment information output unit 13 receives the surrounding environment detection data obtained by the surrounding environment detection unit 5 in step S1, the surrounding environment description unit 17 in the surrounding environment information output unit 13 generates, from the received surrounding environment detection data, character string information in which the surroundings are described by a character string in natural language. For example, when image data obtained by imaging the area ahead of the vehicle is sent by the surrounding environment detection unit 5 as the surrounding environment detection data, the surrounding environment description unit 17 converts the surrounding situation shown by the captured image into a character string, that is, it generates character string information describing the surrounding situation in natural language.
An example of the operation of step S2 will be described. FIG. 5 is a schematic diagram showing an example of a traveling vehicle 38 and the surrounding environment of the vehicle 38. In FIG. 5, the vehicle 38 is traveling on an ordinary road 39 in the direction of the arrow; two pedestrians 40 stand at the corner of the right sidewalk near an intersection and one pedestrian 40 at the corner of the left sidewalk. In this situation, the surrounding environment description unit 17, having received image data captured ahead of the vehicle 38, performs image recognition processing and generates character string information expressing the captured image in natural language. This image processing and character string generation is performed using, for example, machine learning models such as a convolutional neural network (CNN) or a recurrent neural network (RNN). As a result, in the example of FIG. 5, character string information such as the character string "Traveling on an ordinary road, there are three pedestrians ahead" is generated. The generated character string information is output to the morpheme extraction unit 18.
When the character string information generated by the surrounding environment description unit 17 in step S2 is output to the morpheme extraction unit 18, in step S3 the morpheme extraction unit 18 performs morphological analysis on the received character string information and extracts morphemes. Morphological analysis is the process of segmenting a character string into morphemes; for example, the Japanese character string meaning "Traveling on an ordinary road, there are three pedestrians ahead" is segmented into the morphemes 「一般道路」(ordinary road), 「を」, 「走行中」(traveling), 「前方」(ahead), 「に」, 「歩行者」(pedestrian), 「が」, 「3」, 「人」(persons), and 「いる」(are present). The morpheme extraction unit 18 outputs the extracted morphemes to the numerical vectorization unit 19.
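The segmentation step above can be sketched as a dictionary-based longest-match tokenizer. This is a minimal illustration only: the mini-dictionary below is hypothetical, and production morphological analyzers (MeCab, for example) use large lexicons with statistical disambiguation rather than greedy matching.

```python
# Minimal sketch of morphological segmentation by greedy longest match.
# DICTIONARY is a hypothetical toy lexicon covering only the example sentence.
DICTIONARY = {"一般道路", "を", "走行中", "前方", "に", "歩行者", "が", "3", "人", "いる"}

def segment(text: str) -> list[str]:
    """Greedily split `text` into the longest matching dictionary entries."""
    morphemes = []
    i = 0
    max_len = max(len(w) for w in DICTIONARY)
    while i < len(text):
        for length in range(min(max_len, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if candidate in DICTIONARY:
                morphemes.append(candidate)
                i += length
                break
        else:
            # Unknown character: emit it as a single-character morpheme.
            morphemes.append(text[i])
            i += 1
    return morphemes

print(segment("一般道路を走行中前方に歩行者が3人いる"))
# → ['一般道路', 'を', '走行中', '前方', 'に', '歩行者', 'が', '3', '人', 'いる']
```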
In step S4, the numerical vectorization unit 19 converts the morphemes extracted by the morpheme extraction unit 18 in step S3 into a numerical vector. Here, an example using the One-Hot model as the vectorization method is described. FIG. 6 is a correspondence table showing the relationship between dimensions, morphemes, and the resulting vector values when the morphemes 41 obtained in step S3 (「一般道路」「を」「走行中」「前方」「に」「歩行者」「が」「3」「人」「いる」) are vectorized. The correspondence table of FIG. 6 shows only the first 16 dimensions, but the larger the number of dimensions, the finer the granularity with which a situation can be expressed. In the example of FIG. 6, particles are not assigned dimensions of their own as morphemes, but they may also be handled.
The numerical vectorization unit 19 outputs the generated numerical vector to the driving mode evaluation unit 15.
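The vectorization of step S4 can be sketched as follows. The dimension-to-morpheme assignment here is a hypothetical stand-in for the correspondence table of FIG. 6 (the actual table is not reproduced in this text): each dimension is bound to one morpheme and is set to 1 when that morpheme occurs in the input.

```python
# Sketch of One-Hot style vectorization per a fixed dimension assignment.
# This 16-dimension assignment is hypothetical, standing in for FIG. 6;
# particles (を, に, が) are deliberately given no dimension, as in the text.
DIMENSION_MORPHEMES = [
    "一般道路", "高速道路", "交差点", "走行中", "停車中", "前方", "後方",
    "歩行者", "車両", "障害物", "1", "2", "3", "人", "台", "いる",
]

def to_numeric_vector(morphemes: list[str]) -> list[int]:
    """Map extracted morphemes onto the fixed dimension assignment."""
    present = set(morphemes)
    return [1 if m in present else 0 for m in DIMENSION_MORPHEMES]

vec = to_numeric_vector(["一般道路", "を", "走行中", "前方", "に", "歩行者", "が", "3", "人", "いる"])
print(vec)  # 1 in the dimensions for 一般道路, 走行中, 前方, 歩行者, 3, 人, いる
```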
Surrounding environment information is acquired and processed in steps S1 to S4 above. In parallel with these operations, the acquisition and processing of driver state information, starting at step S5, is performed. In step S5, at a timing corresponding to the detection of the vehicle's surroundings by the surrounding environment detection unit 5 in step S1, the driver state detection unit 6 detects the driver's state with biosensors and a camera. As described above, examples of detecting the driver's state include acquiring an electrocardiographic signal with an electrocardiograph and acquiring image data of the eyeballs with a camera.
The driver state detection unit 6 outputs the detected driver state to the driver state information output unit 14 as driver state detection data.
In step S6, using the driver state detection data sent from the driver state detection unit 6 in step S5, the driver state feature extraction unit 20 in the driver state information output unit 14 extracts biosignal features of the driver state. For example, the driver state feature extraction unit 20 extracts the heartbeat interval, the QRS width, and the QRS wave height as features from an electrocardiographic signal serving as driver state detection data, and extracts the pupil diameter as a feature from image data of the eyeballs serving as driver state detection data. FIG. 7 shows an example of the extracted driver state features. The driver state feature extraction unit 20 outputs the extracted driver state features to the driving mode evaluation unit 15.
Steps S5 and S6 constitute the acquisition and processing of the driver state information.
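One of the features named above, the heartbeat (R-R) interval, can be sketched as follows. This is a minimal illustration under the assumption that R-peak detection has already been done upstream; the peak timestamps are illustrative values, not data from the embodiment.

```python
# Sketch of driver-state feature extraction: mean heartbeat (R-R) interval
# from R-peak timestamps. Peak detection itself, and the QRS width / height
# measurements mentioned in the text, are assumed to be handled upstream.
def mean_rr_interval(r_peak_times_s: list[float]) -> float:
    """Mean interval between successive R peaks, in seconds."""
    intervals = [t2 - t1 for t1, t2 in zip(r_peak_times_s, r_peak_times_s[1:])]
    return sum(intervals) / len(intervals)

# Illustrative R peaks roughly 0.8 s apart (about 75 bpm).
peaks = [0.00, 0.80, 1.61, 2.40, 3.21]
rri = mean_rr_interval(peaks)
heart_rate_bpm = 60.0 / rri
print(rri, heart_rate_bpm)
```

During drowsy driving the heart rate decreases, so this interval lengthens, which is why it carries weight in the determination model.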
As a result of the operations of steps S1 to S4, the generated numerical vector is output to the driving mode evaluation unit 15, and as a result of the operations of steps S5 and S6, the driver state features are output to the driving mode evaluation unit 15. After these operations, the process proceeds to step S7, in which the driving mode evaluation unit 15 calculates evaluation values of the suitability of manual driving and automatic driving for the situation in which the surrounding environment information and driver state information were acquired in steps S1 and S5, based on the numerical vector input from the numerical vectorization unit 19, the driver state features input from the driver state feature extraction unit 20, and the driving mode determination model stored in the determination model storage unit 16.
The method of calculating these evaluation values is described below. FIG. 8 is a schematic diagram showing an example of the concept of the driving mode determination model used in this calculation. The driving mode determination model indicates, for each component of the numerical vector serving as surrounding environment information and for each driver state feature serving as driver state information, its degree of influence on automatic driving and on manual driving; in the driving mode determination model shown in FIG. 8, each degree of influence is expressed as a numerical value. Although the degrees of influence of the surrounding environment information and the driver state information on automatic and manual driving are expressed here as numerical values, they need only express the correspondence between the input surrounding environment information and driver state information and each of automatic and manual driving, and may, for example, be expressed as functions. In this driving mode determination model, the collection of the degrees of influence of each component of the numerical vector and of each driver state feature for a given driving mode is called the influence vector of that mode.
The driving mode evaluation unit 15 first combines the numerical vector input from the numerical vectorization unit 19 and the driver state features input from the driver state feature extraction unit 20 into a feature vector representing both the surrounding environment and the driver state. For example, if the vector A input from the numerical vectorization unit 19 is an n-dimensional vector with components A = (a1, a2, …, an), and there are m driver state features input from the driver state feature extraction unit 20 with values b1, b2, …, bm, the generated feature vector C is the (n+m)-dimensional vector with components C = (a1, a2, …, an, b1, b2, …, bm).
Next, the driving mode evaluation unit 15 calculates an evaluation value for each driving mode. Specifically, the inner product of the feature vector and the influence vector is computed according to Equation (1), in which the denominator is a normalization constant. Here the feature vector used as input is assumed to have been converted to binary values of 0 or 1.
    P(Yi | X) = exp(Wi · X) / ( exp(W1 · X) + exp(W2 · X) )   …(1)
Here, X is the feature vector and Yi is the driving mode, where i = 1 represents automatic driving and i = 2 represents manual driving. P(Yi | X) is the evaluation value of Yi given the input X, and Wi is the influence vector of each driving mode: the influence vector for automatic driving when i = 1 and for manual driving when i = 2. The subscript i of Wi does not denote a component of the vector; it merely distinguishes the influence vector for automatic driving from that for manual driving.
The formula for calculating the evaluation value of each driving mode is not limited to this; any formula that reflects the degree of influence of each feature based on the driving mode determination model may be used. For example, the calculation may be performed according to Equation (2).
    P(Yi | X) = (Wi · X) / ( (W1 · X) + (W2 · X) )   …(2)
The meaning of each symbol is the same as in Equation (1).
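The evaluation of step S7 can be sketched as follows. The log-linear (softmax) normalization is an assumption modelled on the maximum entropy method named later in the text; the weight values are illustrative and are not taken from the actual model of FIG. 8.

```python
import math

# Sketch of the step-S7 evaluation. The feature vector X concatenates the
# surrounding-environment numeric vector (n dims) with the binarized driver
# state features (m dims); each mode's score is the inner product with its
# influence vector W_i, normalized over the two driving modes.
def evaluate_modes(x: list[float], w_auto: list[float], w_manual: list[float]) -> dict[str, float]:
    score_auto = sum(xi * wi for xi, wi in zip(x, w_auto))
    score_manual = sum(xi * wi for xi, wi in zip(x, w_manual))
    z = math.exp(score_auto) + math.exp(score_manual)  # normalization constant
    return {"automatic": math.exp(score_auto) / z,
            "manual": math.exp(score_manual) / z}

x = [1, 0, 1, 1]                 # binarized feature vector C = (a1.., b1..)
w_auto = [0.9, 0.1, 0.8, 0.7]    # illustrative influence vector, automatic
w_manual = [0.2, 0.5, 0.3, 0.1]  # illustrative influence vector, manual
ev = evaluate_modes(x, w_auto, w_manual)
print(ev)  # the two evaluation values sum to 1
```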
FIG. 9 is a diagram showing an example in which evaluation values of the suitability of manual driving and automatic driving are calculated for a combination of surrounding environment information and driver state information. In the situation shown in this example, the evaluation value of the suitability for manual driving is calculated as 0.2 and that for automatic driving as 0.8.
The obtained evaluation values are output to the driving mode candidate selection unit 7.
In step S8, the driving mode candidate selection unit 7 selects a driving mode candidate appropriate for the current surrounding environment and driver state based on the evaluation values obtained by the driving mode evaluation unit 15. In the present embodiment, a threshold is set on the difference between the evaluation values of automatic driving and manual driving, and when the difference exceeds the threshold, the driving mode with the larger evaluation value is set as the appropriate driving mode candidate. For example, with the threshold set to 0.5, the evaluation result of FIG. 9 gives a difference of 0.6 between the automatic and manual driving evaluation values, which exceeds the threshold, so automatic driving, which has the larger evaluation value, is selected as the appropriate driving mode candidate. When the difference does not exceed the threshold, the currently applied driving mode is selected as the candidate. Although a threshold is used here, the driving mode with the larger final evaluation value may more simply be selected as the appropriate driving mode candidate.
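The threshold rule of step S8 can be sketched as follows; the 0.5 threshold and the 0.8 / 0.2 evaluation values follow the example in the text.

```python
# Sketch of step S8: select the mode with the larger evaluation value when the
# difference exceeds a threshold, otherwise keep the current driving mode.
def select_candidate(evals: dict[str, float], current_mode: str, threshold: float = 0.5) -> str:
    modes = sorted(evals, key=evals.get, reverse=True)
    best, second = modes[0], modes[1]
    if evals[best] - evals[second] > threshold:
        return best
    return current_mode

# FIG. 9 example: automatic 0.8 vs manual 0.2 -> difference 0.6 exceeds 0.5.
print(select_candidate({"automatic": 0.8, "manual": 0.2}, current_mode="manual"))
# → automatic
```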
In step S9, the driving mode candidate selection unit 7 compares the driving mode candidate selected in step S8 with the current driving mode; this operation is also performed by the driving mode candidate selection unit 7. If they match, the process proceeds to step S10, where the driving mode presentation unit 8 is made to indicate that the current driving mode is appropriate, and a signal instructing continued application of the current driving mode is output to the driving control unit 10. The process then proceeds to step S11, where the driving control unit 10, having received this signal, continues to apply the current driving mode and performs driving control.
If, in step S9, the driving mode candidate does not match the current driving mode, the process proceeds to step S12, where the driving mode candidate selection unit 7 causes the driving mode presentation unit 8 to present a prompt to switch to the selected driving mode candidate.
When the selected driving mode candidate is presented on the driving mode presentation unit 8, the driver decides whether to switch to the presented driving mode candidate.
The process then proceeds to step S13, where the driving control unit 10 waits to see whether the driver issues a driving mode switching instruction via the driving mode input unit 9. If a switching instruction is given, the process proceeds to step S14, where the driving control unit 10 switches the driving mode to the candidate selected in step S8 and performs driving control. If no switching instruction is given in step S13, for example if an instruction to continue the current driving mode is input, or if nothing is input for a certain period after the prompt to switch driving modes is presented on the driving mode presentation unit 8, the process proceeds to step S11 and the driving control unit 10 continues to apply the current driving mode and performs driving control.
The driving assistance device 1, the driving control device 3, the surrounding environment detection unit 5, and the driver state detection unit 6 repeat the operations shown in steps S1 to S14 to provide driving assistance.
Through the operations of the driving assistance device 1, the driving control device 3, the surrounding environment detection unit 5, and the driver state detection unit 6 described above, evaluation values representing the suitability of automatic driving and manual driving for the combination of the vehicle's surrounding environment information and the driver state information are calculated. Using these evaluation values, an appropriate driving mode candidate matching the surrounding environment and driver state can be presented to the driver, assisting the driver in deciding on an appropriate driving mode.
Although the operation of the vehicle communication unit 11 is omitted from the flowchart of FIG. 4, the vehicle communication unit 11 communicates with the server communication unit 12 at an arbitrary timing before step S7 and acquires the driving mode determination model from the information server 4.
Next, the operation of the information server 4 will be described with reference to the flowchart of FIG. 10.
First, in step S101, the surrounding environment description unit 23 generates, from the surrounding environment detection data stored in the learning data storage unit 22, character string information in which the surrounding environment is described by a character string in natural language. The generated character string information is output to the morpheme extraction unit 24.
Next, in step S102, the morpheme extraction unit 24 performs morphological analysis on the character string information generated by the surrounding environment description unit 23 in step S101 and extracts morphemes. The extracted morphemes are output to the numerical vectorization unit 25.
In step S103, the numerical vectorization unit 25 converts the morphemes obtained by the morpheme extraction unit 24 in step S102 into a numerical vector. The converted numerical vector is output to the driving mode determination model generation unit 27.
In step S104, the driver state feature extraction unit 26 extracts biosignal features of the driver state from the driver state detection data stored in the learning data storage unit 22. The extracted driver state features are output to the driving mode determination model generation unit 27.
In step S105, the driving mode determination model generation unit 27 generates the driving mode determination model by a statistical method, based on the numerical vector input from the numerical vectorization unit 25, the driver state features input from the driver state feature extraction unit 26, and the driving mode data input from the learning data storage unit 22. For example, the model can be generated using the maximum entropy method. Specifically, as in Equations (1) and (2), parameters Wi characterizing the functional form of the probability distribution of the realization probability P are introduced, a form is assumed for the probability distribution function realized by each driving mode, and the parameters Wi are optimized. The optimized Wi are the influence vectors in the driving mode determination model, and the resulting realization probability P is the evaluation value for each driving mode. Alternatively, probability distribution functions of the realization probabilities of automatic driving and manual driving may be obtained for each component of the numerical vector serving as surrounding environment information and for each feature of the driver state information, and their joint probability used as the evaluation value for each driving mode. What matters here is that the generation of the determination model and the calculation of the evaluation value of each driving mode are performed by a statistical method; the method is not limited to those above.
The generated driving mode determination model is stored in the determination model storage unit 21.
Steps S101 to S103 above are the same operations as steps S2 to S4 of the flowchart described with reference to FIG. 4 for the driving assistance operation of the driving assistance device 1 in FIG. 1, and likewise the operation of step S104 is the same as that of step S6.
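The parameter optimization of step S105 can be sketched as gradient ascent on the log-likelihood of a two-class log-linear (maximum entropy) model. All numbers below are toy values, and the two feature dimensions and their labels are hypothetical illustrations, not data from the embodiment.

```python
import math

# Sketch of step S105: optimize the influence vectors W_i by gradient ascent
# on a two-class log-linear (maximum entropy) model. Each training example
# pairs a binarized feature vector with the driving mode actually applied.
def train(data, n_dims, lr=0.5, epochs=200):
    w = [[0.0] * n_dims, [0.0] * n_dims]  # influence vectors: [automatic, manual]
    for _ in range(epochs):
        for x, label in data:  # label: 0 = automatic, 1 = manual
            scores = [sum(xi * wi for xi, wi in zip(x, w[i])) for i in range(2)]
            z = sum(math.exp(s) for s in scores)
            probs = [math.exp(s) / z for s in scores]
            for i in range(2):
                grad = (1.0 if i == label else 0.0) - probs[i]  # log-likelihood gradient
                for d in range(n_dims):
                    w[i][d] += lr * grad * x[d]
    return w

# Toy data: dim 0 = "long heartbeat interval" -> automatic driving chosen;
# dim 1 = "pedestrians ahead" -> manual driving chosen.
data = [([1, 0], 0), ([1, 0], 0), ([0, 1], 1), ([0, 1], 1)]
w = train(data, n_dims=2)
print(w[0][0] > w[1][0])  # dim 0 ends up influencing "automatic" more
print(w[1][1] > w[0][1])  # dim 1 ends up influencing "manual" more
```

The learned w plays the role of the influence vectors, and the softmax probabilities computed inside `train` correspond to the per-mode evaluation values.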
When the server communication unit 12 receives a request signal for the driving mode determination model from the vehicle communication unit 11, it outputs the driving mode determination model stored in the determination model storage unit 21 to the vehicle communication unit 11.
Through the operation of the information server 4 described above, the driving mode determination model is provided to the driving assistance device 1, which stores the provided driving mode determination model in the determination model storage unit 16 and can then use it for evaluating driving modes during the driving assistance operation.
Furthermore, because the driving mode determination model generation device 2 appropriately constructs the driving mode determination model from a large amount of learning data by a statistical method, the evaluation values of automatic driving and manual driving, which are useful in selecting a driving mode, can be calculated more appropriately.
Further, in the present embodiment, the surrounding environment information is converted into a natural-language character string by the surrounding environment description unit 17, and morphemes are then extracted from the character string by the morpheme extraction unit 18. The advantage of this approach will now be described. In the real world, the objects that appear in the surrounding environment of a vehicle are diverse, and their combinations and positional relationships change frequently; therefore, when creating a model that represents the surrounding environment, setting up each surrounding environment individually is extremely laborious.
In the present embodiment, without setting each surrounding environment individually, the surrounding environments that were set in conventional models can be expressed by combinations of morphemes defined in the natural language database, which simplifies the manual work. Moreover, surrounding environments that the model designer could not anticipate, and therefore could not set in conventional models, can also be expressed by combinations of morphemes.
The natural language database used here may be an existing one, or may be newly created for the operation mode determination model.
The advantage of describing the surrounding environment with a natural-language character string is not only that a natural language database can be used. By performing logical inference on the character string using a knowledge structure database, information can be added not only about the surrounding environment acquired by the sensors but also about situations outside the sensor range. For example, consider a case where a soccer ball appears in an image acquired by a camera. When a soccer ball is in the picture, a child may be playing nearby, outside the image; if the child suddenly runs into the road to retrieve the ball, the child may collide with the vehicle. Logical inference using the knowledge structure database makes it possible to infer, from an image containing a soccer ball, the possibility that a child will run out; by presenting this to the driver, the driver can drive so as to avoid the collision in advance.
In this example, specifically, inference and information presentation are performed by the following procedure. First, when a soccer ball is recognized in the image, the character string "soccer ball" is described. If this character string "soccer ball" is associated with "child" in the knowledge structure database, information based on "child" is presented; since the possible presence of a "child" invisible to the driver is suggested, the driver carefully checks whether a child might run out. When more detailed information is recognized, for example from an image or video, such as "a soccer ball is rolling down the road", the character string "a soccer ball is rolling down the road" is described. Then, as above, it can be inferred from the fact that the "soccer ball" is "rolling" that a "child" may "run out", and more detailed information can be provided to the driver.
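The inference procedure above can be sketched as a lookup over an association database. This is a minimal sketch with a hypothetical, hand-written knowledge structure; a real knowledge structure database would be far richer and the matching logic more sophisticated.

```python
# Hypothetical knowledge structure: observed morphemes -> inferred morphemes.
KNOWLEDGE = {
    ("soccer ball",): ["child"],
    ("soccer ball", "rolling"): ["child", "run out"],
}

def infer(observed_morphemes):
    """Return morphemes inferred from the observed ones.

    The richest matching key (the one covering the most observed morphemes)
    wins, so "soccer ball" plus "rolling" yields the more detailed inference.
    """
    matches = [key for key in KNOWLEDGE
               if all(m in observed_morphemes for m in key)]
    if not matches:
        return []
    best = max(matches, key=len)
    return KNOWLEDGE[best]

assert infer(["soccer ball"]) == ["child"]
assert infer(["soccer ball", "rolling"]) == ["child", "run out"]
assert infer(["traffic light"]) == []
```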
Also, in the evaluation of the driving mode, the value of the numerical vector component corresponding to a specific morpheme becomes 1 and the evaluation values change accordingly, which enables an evaluation of the driving mode better suited to the surrounding environment. For example, when it is recognized from an image or video that "a soccer ball is rolling down the road" and this is described as a character string, it can be inferred, with high likelihood, that a "child" may "run out" because the "soccer ball" is "rolling". In this case, in addition to the originally recognized morphemes such as "soccer ball" and "rolling", the morphemes "child" and "run out" are added by inference. When the components of the numerical vector corresponding to "child" and "run out" become 1, for example, the evaluation value of manual driving decreases while that of automatic driving increases; by forcibly performing automatic driving even when the driver does not notice a child suddenly running out or the driver's reaction is likely to be delayed, a collision between the vehicle and the child can be avoided.
In the present embodiment described above, the driving support device 1 is configured to include the determination model storage unit 16. However, the driving support device 1 may instead be configured without the determination model storage unit 16, with the driving mode evaluation unit 15 referring as appropriate to the determination model storage unit 21 in the information server 4 through communication via the vehicle communication unit 11 and the server communication unit 12.
The driving support device in the present embodiment is configured to use the driving mode determination model to calculate evaluation values representing the suitability of automatic driving and manual driving for the combination of the surrounding environment information and the driver state information. However, it suffices to calculate such evaluation values based on the combination of the surrounding environment information and the driver state information, and a configuration that does not use the driving mode determination model is also possible. For example, for the surrounding environment information, each additional pedestrian may increase the evaluation value of automatic driving by 1 and decrease that of manual driving by 1; for the driver state information, a threshold may be set for the heart rate, and when this threshold is exceeded, the evaluation value of automatic driving may be increased by 10.
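The model-free variant described in this paragraph can be sketched directly from its two example rules. The baseline scores of 0 and the heart-rate threshold value of 100 bpm are illustrative assumptions not stated in the text.

```python
def evaluate_modes(pedestrian_count, heart_rate, heart_rate_threshold=100):
    """Rule-based evaluation values without a driving mode determination model.

    Per the rules in the text: each pedestrian raises the automatic-driving
    score by 1 and lowers the manual-driving score by 1; exceeding the
    heart-rate threshold raises the automatic-driving score by 10.
    """
    auto, manual = 0, 0  # illustrative baseline scores
    auto += pedestrian_count
    manual -= pedestrian_count
    if heart_rate > heart_rate_threshold:
        auto += 10
    return {"automatic": auto, "manual": manual}

scores = evaluate_modes(pedestrian_count=3, heart_rate=110)
assert scores == {"automatic": 13, "manual": -3}
```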
Embodiment 2
Next, a driving support device 1 and a driving mode determination model generation device 2 according to Embodiment 2 of the present invention will be described. In the second embodiment, in addition to the configuration of the first embodiment, the information server 4 includes a learning data addition unit 42. By changing whether the learning data addition unit 42 adds learning data according to the driver's judgment input to the driving mode input unit 9, the accuracy of the determination model is improved.
FIG. 11 is a configuration diagram showing the configuration of the driving support device 1 and the driving mode determination model generation device 2 according to Embodiment 2 for carrying out the present invention. Elements given the same reference numerals as in FIG. 1 are the same as those in FIG. 1.
In addition to the functions described in the first embodiment, when the driver does not switch to the presented driving mode, the driving mode input unit 9 outputs a signal indicating that the driving mode is not to be switched to the learning data addition unit 42 via the vehicle communication unit 11 and the server communication unit 12. Here, "when the driver does not switch to the presented driving mode" includes not only the case where the driver inputs to the driving mode input unit 9 that driving in the current driving mode is to be continued, but also the case where, after content prompting a switch of the driving mode is presented on the driving mode presentation unit 8, there is no input from the driver to the driving mode input unit 9 for a certain period of time.
When a signal indicating that the driving mode is not to be switched is input from the driving mode input unit 9, the learning data addition unit 42 collects learning data from the surrounding environment information output unit 13, the driver state information output unit 14, and the driving control unit 10 via the server communication unit 12 and the vehicle communication unit 11, and adds the learning data to the learning data storage unit 22.
The learning data to be added is, for example, a data set of surrounding environment detection data, driver state detection data, and driving mode data. The driving mode data here is not the driving mode candidate presented by the driving mode presentation unit 8, but driving mode data, as driving mode information, recording the currently applied driving mode. The reason is that when the driver does not switch the driving mode, the driver has judged that the driving mode candidate presented by the driving mode presentation unit 8 is not appropriate for the current surrounding environment and driver state. Therefore, as the driving mode data corresponding to the surrounding environment detection data and driver state detection data used to select the presented driving mode, that is, as the learning data to be added, the presented driving mode candidate is inappropriate and the currently applied driving mode is appropriate.
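The rule above for which driving mode to record can be sketched as follows. The data-set fields and the in-memory list standing in for the learning data storage unit 22 are purely illustrative.

```python
learning_data_store = []  # stands in for the learning data storage unit 22

def add_learning_data(environment_data, driver_data,
                      current_mode, presented_candidate, switched):
    """Add a data set only when the driver rejected the presented candidate.

    The recorded driving mode is the currently applied mode, not the
    rejected candidate: the driver's refusal indicates that the current
    mode is the appropriate label for this environment and driver state.
    """
    if switched:
        return  # driver accepted the candidate: nothing to add
    learning_data_store.append({
        "environment": environment_data,
        "driver": driver_data,
        "mode": current_mode,  # deliberately NOT presented_candidate
    })

add_learning_data({"pedestrians": 2}, {"heart_rate": 95},
                  current_mode="manual", presented_candidate="automatic",
                  switched=False)
assert learning_data_store[0]["mode"] == "manual"
```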
In the present embodiment, the learning data addition unit 42 obtains the surrounding environment detection data from the surrounding environment description unit 17 provided in the surrounding environment information output unit 13, and the driver state detection data from the driver state feature amount extraction unit 20 provided in the driver state information output unit 14. The surrounding environment description unit 17 includes a memory that temporarily stores the surrounding environment detection data input from the surrounding environment detection unit 5. Each item of input surrounding environment detection data is given an identifier, indicated by "No." in FIG. 2; when the driver does not switch to the presented driving mode, the learning data addition unit 42 can read the corresponding surrounding environment detection data from the surrounding environment description unit 17 by specifying it with this identifier. Similarly, the driver state feature amount extraction unit 20 includes a memory that temporarily stores the driver state detection data input from the driver state detection unit 6, and each item of input driver state detection data is given an identifier indicated by "No." in FIG. 2. When the driver does not switch to the presented driving mode, the learning data addition unit 42 can read the corresponding driver state detection data from the driver state feature amount extraction unit 20 by specifying it with this identifier.
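The identifier-based readback described above can be sketched with a small bounded buffer keyed by the "No." identifier. The buffer capacity and the dictionary-based store are illustrative assumptions; the patent only specifies a temporary memory addressed by identifier.

```python
from collections import OrderedDict

class DetectionBuffer:
    """Temporary memory keyed by the "No." identifier of each detection record.

    Mirrors the memories in the surrounding environment description unit 17
    and the driver state feature amount extraction unit 20: recent records
    are kept so the learning data addition unit 42 can read them back later.
    """
    def __init__(self, capacity=100):
        self.capacity = capacity
        self._records = OrderedDict()

    def store(self, record_no, data):
        self._records[record_no] = data
        if len(self._records) > self.capacity:
            self._records.popitem(last=False)  # evict the oldest record

    def read(self, record_no):
        return self._records.get(record_no)  # None if already evicted

buf = DetectionBuffer(capacity=2)
buf.store(1, "pedestrian ahead")
buf.store(2, "soccer ball rolling")
buf.store(3, "clear road")          # evicts record No. 1
assert buf.read(1) is None
assert buf.read(3) == "clear road"
```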
The learning data addition unit 42 also acquires, from the driving control unit 10, driving mode data as driving mode information indicating the current driving mode. Here, the current driving mode is the driving mode already applied in the driving control unit 10 at the time the driver inputs whether to switch the driving mode.
In the present embodiment, the learning data addition unit 42 acquires the surrounding environment detection data from the surrounding environment description unit 17, the driver state detection data from the driver state feature amount extraction unit 20, and the driving mode data from the driving control unit 10. However, as long as each item of data can be acquired, the configuration for data acquisition is not limited to the above; for example, the surrounding environment detection data may be acquired from the surrounding environment detection unit 5, the driver state detection data from the driver state detection unit 6, and the driving mode data from the driving mode candidate selection unit 7.
The configurations of the driving control device 3 and the information server 4 other than those described above, and the configurations of the surrounding environment detection unit 5 and the driver state detection unit 6 in the present embodiment, are the same as in the first embodiment. The driving support device 1 and the driving mode determination model generation device 2 in the present embodiment are configured as described above.
The hardware configuration in the present embodiment is the same as that shown in FIG. 3. The learning data addition unit 42 added in the second embodiment is realized by the processing device 29 executing a program stored in the storage device 31. This function is not limited to a combination of hardware and software; the above program may be implemented in the processing device 29 and realized by hardware alone.
Next, the operation in the present embodiment will be described.
In the second embodiment as well, the same operation as in the flowchart of FIG. 4 shown in the first embodiment is performed. However, in the second embodiment, the operations from step S9 onward in FIG. 4 differ; the operations from step S8 onward are shown in FIG. 12. The difference from FIG. 4 is that, when the branch at step S13 proceeds to "No", the operations of steps S201 and S202 are performed before step S11.
After proceeding to "No" in step S13, in step S201 the learning data addition unit 42 collects learning data via the vehicle communication unit 11 and the server communication unit 12. The learning data in the present embodiment is a data set of surrounding environment detection data, driver state detection data, and driving mode data representing the current driving mode. The surrounding environment detection data is output from the surrounding environment description unit 17, the driver state detection data from the driver state feature amount extraction unit 20, and the driving mode data from the driving control unit 10.
In step S202, the learning data addition unit 42 outputs the learning data collected in step S201 to the learning data storage unit 22, and the learning data storage unit 22 stores the input learning data.
After step S202 is performed, the process proceeds to step S11. The operation in step S11 is the same as the operation in the first embodiment.
With the above operation, in the second embodiment, the data set obtained when a driving mode candidate that did not match the driver's judgment was presented is stored, together with the driving mode judged appropriate by the driver, in the learning data storage unit 22 as learning data. This improves the accuracy of the driving mode determination model generated next time and enables driving assistance better suited to that driver.
In the present embodiment, surrounding environment detection data and driver state detection data are assumed as the learning data to be added. However, any data set of surrounding environment information and driver state information at the timing when the driver rejected the switch of the driving mode may be used instead of these detection data; for example, a numerical vector may be stored as the surrounding environment information and driver state feature amount information as the driver state information.
The driving support device and the driving mode determination model generation device according to the present invention are applicable to an automatic driving system for a vehicle.
1 driving support device, 2 driving mode determination model generation device, 3 driving control device, 4 information server, 5 surrounding environment detection unit, 6 driver state detection unit, 7 driving mode candidate selection unit, 8 driving mode presentation unit, 9 driving mode input unit, 10 driving control unit, 11 vehicle communication unit, 12 server communication unit, 13 surrounding environment information output unit, 14 driver state information output unit, 15 driving mode evaluation unit, 16 determination model storage unit, 17 surrounding environment description unit, 18 morpheme extraction unit, 19 numerical vectorization unit, 20 driver state feature amount extraction unit, 21 determination model storage unit, 22 learning data storage unit, 23 surrounding environment description unit, 24 morpheme extraction unit, 25 numerical vectorization unit, 26 driver state feature amount extraction unit, 27 driving mode determination model generation unit, 28 processing device, 29 processing device, 30 storage device, 31 storage device, 32 input device, 33 input device, 34 input device, 35 output device, 36 communication device, 37 communication device, 38 vehicle, 39 general road, 40 pedestrian, 41 morpheme, 42 learning data addition unit

Claims (10)

  1. A driving support device comprising:
    a surrounding environment information output unit that outputs surrounding environment information representing the surrounding environment of a vehicle;
    a driver state information output unit that outputs driver state information representing the state of the driver corresponding to the surrounding environment; and
    a driving mode evaluation unit that, based on a combination of the surrounding environment information output by the surrounding environment information output unit and the driver state information output by the driver state information output unit, calculates evaluation values representing the suitability of automatic driving and manual driving for the combination of the surrounding environment information and the driver state information.
  2. The driving support device according to claim 1, further comprising a determination model storage unit that stores a driving mode determination model indicating the degree of influence of each of the vehicle's surrounding environment and the driver's state on each of automatic driving and manual driving,
    wherein the driving mode evaluation unit refers to the driving mode determination model stored in the determination model storage unit for the degrees of influence corresponding to the surrounding environment information output by the surrounding environment information output unit and the driver state information output by the driver state information output unit, and calculates the evaluation values based on a combination of the degrees of influence, the surrounding environment information, and the driver state information.
  3. The driving support device according to claim 2, wherein the surrounding environment information output unit comprises:
    a surrounding environment description unit that generates character string information describing the surrounding environment in a natural language, based on surrounding environment detection data obtained by detecting the surrounding environment of the vehicle; and
    a morpheme extraction unit that performs morphological analysis on the character string information generated by the surrounding environment description unit, extracts morphemes, and outputs the morphemes as the surrounding environment information.
  4. The driving support device according to claim 2, wherein the surrounding environment information output unit comprises:
    a surrounding environment description unit that generates character string information describing the surrounding environment in a natural language, based on surrounding environment detection data obtained by detecting the surrounding environment of the vehicle;
    a morpheme extraction unit that performs morphological analysis on the character string information generated by the surrounding environment description unit and extracts morphemes; and
    a numerical vectorization unit that converts the morphemes extracted by the morpheme extraction unit into a numerical vector and outputs the numerical vector as the surrounding environment information.
  5. A driving mode determination model generation device comprising:
    a learning data storage unit that stores, as learning data, surrounding environment information representing the surrounding environment of a vehicle, driver state information representing a driver state corresponding to the surrounding environment, and driving mode information indicating a driving mode corresponding to the surrounding environment and the driver state, in association with one another; and
    a driving mode determination model generation unit that generates a driving mode determination model indicating the degree of influence of the vehicle's surrounding environment and the driver's state on each of automatic driving and manual driving, based on the surrounding environment information, the driver state information, and the driving mode information stored in the learning data storage unit.
  6. The driving mode determination model generation device according to claim 5, wherein the driver state information stored in the learning data storage unit is driver state feature amount information indicating the state of the driver by feature amounts, and the driving mode determination model generation unit uses the driver state feature amount information as the driver state information when generating the driving mode determination model.
  7. The driving mode determination model generation device according to claim 5, wherein the driver state information stored in the learning data storage unit is driver state detection data in which the state of the driver is detected by a sensor,
    the device further comprising a driver state feature amount information extraction unit that extracts, based on the driver state detection data stored in the learning data storage unit, driver state feature amount information indicating the state of the driver by feature amounts,
    wherein the driving mode determination model generation unit generates the driving mode determination model based on the driver state feature amount information extracted by the driver state feature amount information extraction unit and on the surrounding environment information and the driving mode information stored in the learning data storage unit.
  8. The driving mode determination model generation device according to claim 5, wherein the surrounding environment information stored in the learning data storage unit is surrounding environment detection data in which the surrounding environment of the vehicle is detected by a sensor,
    the device further comprising:
    a surrounding environment description unit that generates character string information describing the surrounding environment in a natural language, based on the surrounding environment detection data stored in the learning data storage unit; and
    a morpheme extraction unit that performs morphological analysis on the character string information generated by the surrounding environment description unit and extracts morphemes,
    wherein the driving mode determination model generation unit generates the driving mode determination model based on the morphemes extracted by the morpheme extraction unit and on the driver state information and the driving mode information stored in the learning data storage unit.
  9. The driving mode determination model generation device according to claim 8, further comprising a numerical vectorization unit that converts the morphemes extracted by the morpheme extraction unit into a numerical vector,
    wherein the driving mode determination model generation unit generates the driving mode determination model based on the numerical vector generated by the numerical vectorization unit and on the driver state information and the driving mode information stored in the learning data storage unit.
  10. The driving mode determination model generation device according to any one of claims 5 to 9, further comprising:
    a surrounding environment information output unit that outputs surrounding environment information representing the surrounding environment of a vehicle;
    a driver state information output unit that outputs driver state information representing the state of the driver corresponding to the surrounding environment;
    a judgment model storage unit that stores a driving mode determination model indicating the degree of influence that the vehicle's surrounding environment and the driver state each have on the automatic driving mode and the manual driving mode;
    a driving mode evaluation unit that looks up, in the driving mode determination model stored in the judgment model storage unit, the degrees of influence corresponding to the surrounding environment information output by the surrounding environment information output unit and to the driver state information output by the driver state information output unit, and calculates, on the basis of the combination of the degrees of influence, the surrounding environment information, and the driver state information, evaluation values representing the suitability of automatic driving and manual driving for that combination of surrounding environment information and driver state information;
    a driving mode candidate selection unit that selects a driving mode candidate appropriate for the surrounding environment information and the driver state information on the basis of the evaluation values calculated by the driving mode evaluation unit;
    a driving mode presentation unit that presents driving assistance information to the driver on the basis of the driving mode candidate selected by the driving mode candidate selection unit;
    a driving mode input unit through which the driver inputs whether or not to switch the driving mode on the basis of the driving assistance information presented by the driving mode presentation unit; and
    a learning data addition unit that, when a signal indicating that no switch is to be made is input to the driving mode input unit, adds the surrounding environment information output by the surrounding environment information output unit, the driver state information output by the driver state information output unit, and driving mode information indicating the current driving mode to the learning data storage unit.
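The evaluation and candidate-selection flow of claim 10 can be sketched as follows. The factor names, influence weights, and function names are all invented for this example; the patent does not specify how the influence degrees are combined, so simple additive scoring is assumed here.

```python
# Hypothetical judgment model: each observed environment or driver-state
# factor carries a degree of influence on each driving mode.
influence_model = {
    "heavy_rain":    {"auto": -0.6, "manual": -0.2},
    "highway":       {"auto": +0.8, "manual": +0.1},
    "driver_drowsy": {"auto": +0.7, "manual": -0.9},
    "driver_alert":  {"auto":  0.0, "manual": +0.5},
}

def evaluate(environment_factors, driver_state_factors, model):
    """Sum the influence degrees of all observed factors per driving mode."""
    scores = {"auto": 0.0, "manual": 0.0}
    for factor in environment_factors + driver_state_factors:
        for mode, weight in model.get(factor, {}).items():
            scores[mode] += weight
    return scores

def select_candidate(scores):
    """Pick the mode with the higher suitability score as the candidate."""
    return max(scores, key=scores.get)

scores = evaluate(["highway"], ["driver_drowsy"], influence_model)
candidate = select_candidate(scores)
```

In this sketch a drowsy driver on a highway yields a higher score for automatic driving, so "auto" would be presented as the candidate; if the driver then declines to switch, the learning data addition unit would log the current environment, driver state, and driving mode as a new training example.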
PCT/JP2018/024277 2018-06-27 2018-06-27 Driving assistance device and driving mode assessment model generation device WO2020003392A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2018/024277 WO2020003392A1 (en) 2018-06-27 2018-06-27 Driving assistance device and driving mode assessment model generation device
JP2020523817A JP6746043B2 (en) 2018-06-27 2018-06-27 Driving support device and driving mode judgment model generation device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/024277 WO2020003392A1 (en) 2018-06-27 2018-06-27 Driving assistance device and driving mode assessment model generation device

Publications (1)

Publication Number Publication Date
WO2020003392A1 true WO2020003392A1 (en) 2020-01-02

Family

ID=68986672

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/024277 WO2020003392A1 (en) 2018-06-27 2018-06-27 Driving assistance device and driving mode assessment model generation device

Country Status (2)

Country Link
JP (1) JP6746043B2 (en)
WO (1) WO2020003392A1 (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017024521A (en) * 2015-07-21 2017-02-02 株式会社デンソー Drive assist control device
JP2018025919A (en) * 2016-08-09 2018-02-15 株式会社東芝 Information processor, information processing method, and mobile body

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPWO2021149095A1 (en) * 2020-01-20 2021-07-29
JP7561774B2 (en) 2020-01-20 2024-10-04 三菱電機株式会社 Mobility support device and mobility support method
CN114077218A (en) * 2022-01-19 2022-02-22 浙江吉利控股集团有限公司 Road data evaluation report generation method, device, equipment and storage medium
CN114077218B (en) * 2022-01-19 2022-04-22 浙江吉利控股集团有限公司 Road data evaluation report generation method, device, equipment and storage medium
WO2023137863A1 (en) * 2022-01-19 2023-07-27 浙江吉利控股集团有限公司 Method, apparatus and device for generating road data evaluation report, and storage medium
WO2024202905A1 (en) * 2023-03-31 2024-10-03 本田技研工業株式会社 Control device and control method

Also Published As

Publication number Publication date
JP6746043B2 (en) 2020-08-26
JPWO2020003392A1 (en) 2020-07-30

Similar Documents

Publication Publication Date Title
US11361452B2 (en) Information processing apparatus, control method, and program
US11593588B2 (en) Artificial intelligence apparatus for generating training data, artificial intelligence server, and method for the same
CN111033512B (en) Motion control device for communicating with autonomous traveling vehicle based on simple two-dimensional planar image pickup device
JP6341311B2 (en) Real-time creation of familiarity index for driver&#39;s dynamic road scene
US10922566B2 (en) Cognitive state evaluation for vehicle navigation
JP2022523730A (en) Neural network-based navigation of autonomous vehicles sewn between traffic entities
US11823020B2 (en) Artificial intelligence apparatus for generating training data for artificial intelligence model and method thereof
WO2019087561A1 (en) Inference device, inference method, program, and persistent tangible computer-readable medium
EP3994426B1 (en) Method and system for scene-aware interaction
JP6746043B2 (en) Driving support device and driving mode judgment model generation device
JP2019523943A (en) Control apparatus, system and method for determining perceptual load of visual and dynamic driving scene
WO2016063742A1 (en) Information presentation device, method, and computer program product
CN106663260B (en) Information presentation device, method, and program
Weyers et al. Action and object interaction recognition for driver activity classification
Henning et al. The quality of behavioral and environmental indicators used to infer the intention to change lanes
JP5360143B2 (en) Driving scene recognition model generation device, driving support device, and program
CN111872928B (en) Obstacle attribute distinguishing method and system and intelligent robot
US20230051467A1 (en) Determining Features based on Gestures and Scale
JP7309095B2 (en) Listener Estimation Apparatus, Listener Estimation Method, and Listener Estimation Program
CN118201554A (en) Cognitive function assessment system and training method
CN117608659A (en) Control method and device for virtual vehicle cabin, readable storage medium and vehicle
Xing Driver lane change intention inference using machine learning methods.
Qu Study and Analysis of Machine Learning Techniques for Detection
CN112949429A (en) Visual auxiliary method, system and device
CN117499746A (en) Video processing method, device and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18924082

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020523817

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18924082

Country of ref document: EP

Kind code of ref document: A1