WO2016116992A1 - Speech learning system and speech learning method - Google Patents

Speech learning system and speech learning method Download PDF

Info

Publication number
WO2016116992A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
vehicle
time
program
speech
Prior art date
Application number
PCT/JP2015/006369
Other languages
French (fr)
Japanese (ja)
Inventor
典子 加藤
Original Assignee
株式会社デンソー
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2015152126A external-priority patent/JP6443257B2/en
Application filed by 株式会社デンソー filed Critical 株式会社デンソー
Priority to US15/542,810 priority Critical patent/US11164472B2/en
Publication of WO2016116992A1 publication Critical patent/WO2016116992A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/04Electrically-operated educational appliances with audible presentation of the material to be studied

Definitions

  • The present disclosure relates to a speech learning system and a speech learning method that allow a user in a vehicle to learn by speech.
  • A speech learning technique is known that provides learning content to the user by voice, enabling learning even in situations where it is difficult to look at text or the like. With this technique, it is possible to learn during time spent driving a vehicle, for example.
  • Such audio learning content is generally provided as a program with a predetermined length for one learning session (for example, one hour).
  • A technique has also been proposed that allows the learning user to set his or her own learning time arbitrarily (Patent Document 1).
  • However, even when such speech learning technology is applied to a vehicle, the user (for example, the driver) often cannot continue learning in it: a fixed session length makes the driver give up starting when the trip is shorter than that length, and having to set the learning time oneself requires a decision that makes the driver hesitate. The present disclosure has been made in view of these problems of the related art, and its object is to provide a speech learning system and a speech learning method that can promote continuous learning in a vehicle.
  • A speech learning system according to one aspect of the present disclosure is applied to a vehicle and provides learning content by voice to a user in the vehicle. It includes a learning element storage unit that stores a plurality of learning elements constituting the learning content, a boarding time estimation unit that estimates the boarding time during which the user is in the vehicle, a learning program generation unit that generates, by combining learning elements from among the plurality of stored learning elements, a one-time learning program that ends within the boarding time estimated by the boarding time estimation unit, and an execution unit that executes the learning program.
  • According to such a speech learning system, learning is guaranteed to finish within the boarding time, so the user can be encouraged to start learning rather than putting it off for lack of time.
  • In addition, since the learning time is set according to the boarding time, no decision by the driver is required.
  • Completing one session of learning within the boarding time gives a sense of accomplishment and raises motivation for the next session, making it possible to promote continuous learning in the vehicle.
  • A speech learning method according to another aspect of the present disclosure is applied to a vehicle and provides learning content by voice to a user in the vehicle. The method includes estimating the boarding time during which the user is in the vehicle, generating a one-time learning program that ends within the boarding time by combining elements from a plurality of pre-stored learning elements constituting the learning content, and executing the learning program.
  • FIG. 1 is an explanatory diagram illustrating a configuration of a speech learning system according to an embodiment of the present disclosure.
  • FIG. 2 is a flowchart illustrating a speech learning control process executed by the speech learning system according to the embodiment of the present disclosure.
  • FIG. 3 is a flowchart illustrating a learning program generation process according to an embodiment of the present disclosure.
  • FIG. 4 is a flowchart illustrating a learning program execution process according to an embodiment of the present disclosure.
  • FIG. 5A is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns a language according to the speech learning control process of an embodiment of the present disclosure.
  • FIG. 5B is a diagram showing learning objectives and learning contents of each step in the example of language learning shown in FIG. 5A.
  • FIG. 6 is an explanatory diagram illustrating a configuration of the speech learning system according to the first modified example of the present disclosure.
  • FIG. 7 is a flowchart illustrating a learning program generation process executed by the speech learning system according to the first modified example of the present disclosure.
  • FIG. 8 is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns using the speech learning system according to the first modification of the present disclosure.
  • FIG. 9 is a flowchart illustrating a speech learning control process executed by the speech learning system according to the second modified example of the present disclosure.
  • FIG. 10 is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns using the speech learning system according to the second modified example of the present disclosure.
  • FIG. 1 shows the configuration of a speech learning system 10 of the present embodiment.
  • The speech learning system 10 of the present embodiment is mounted on a vehicle and provides learning content by voice to a user (for example, the driver) in the vehicle.
  • As shown in FIG. 1, the speech learning system 10 includes a movement schedule acquisition unit 11, a boarding time estimation unit 12, a movement history storage unit 13, a learning element storage unit 14, a learning program generation unit 15, a notification unit 16, an execution unit 17, a learning history storage unit 18, a load information acquisition unit 19, a driving load estimation unit 20, and the like.
  • The movement schedule acquisition unit 11 acquires the departure place and destination of a vehicle that is about to start moving. For example, when a destination is set in a navigation system (not shown) mounted on the vehicle, the movement schedule acquisition unit 11 acquires the set destination and the current location of the vehicle as the departure place.
  • Based on the movement schedule acquired by the movement schedule acquisition unit 11, the boarding time estimation unit 12 estimates the time required to travel from the departure place to the destination as the boarding time (that is, the time during which the user is confined to the vehicle).
  • The movement history storage unit 13 stores the vehicle's past movement history together with the date, day of the week, time, and so on.
  • Based on the driver's habitual behavior stored as history in the movement history storage unit 13, the boarding time estimation unit 12 can also estimate, as the boarding time, the time during which the driver waits in the vehicle after moving to a specific place. Both estimation paths are sketched below.
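  • The following Python sketch illustrates, under stated assumptions, how these two estimation paths (a navigation ETA when a movement schedule has been acquired, and a habitual-behavior prediction from the movement history otherwise) might be combined. The patent does not specify an implementation; every class, method, and parameter name here is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class MovementSchedule:
    origin: str       # departure place (e.g. the current location)
    destination: str

class BoardingTimeEstimator:
    """Hypothetical sketch of the boarding time estimation unit (12)."""

    def __init__(self, navigation, movement_history):
        self.navigation = navigation              # assumed to expose estimate_travel_minutes()
        self.movement_history = movement_history  # assumed to expose expected_wait_minutes()

    def estimate(self, schedule: Optional[MovementSchedule],
                 now: datetime, location: str) -> Optional[timedelta]:
        # Case 1: a movement schedule was acquired -> use the travel time origin -> destination.
        if schedule is not None:
            minutes = self.navigation.estimate_travel_minutes(schedule.origin, schedule.destination)
            return timedelta(minutes=minutes)
        # Case 2: no schedule -> check whether the vehicle is waiting at a registered specific
        # place (e.g. a cram-school parking lot) and predict the habitual waiting time.
        wait = self.movement_history.expected_wait_minutes(location, now)
        if wait is not None:
            return timedelta(minutes=wait)
        return None  # the boarding time cannot be estimated yet
```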
  • The learning element storage unit 14 stores a plurality of learning elements constituting the speech learning content.
  • The speech learning system 10 of the present embodiment provides content for language learning (for example, a user whose native language is Japanese learning English), and the learning element storage unit 14 stores in advance a large number of words and short phrases as learning elements constituting the language-learning content.
  • The learning program generation unit 15 sets a learning time shorter than the boarding time estimated by the boarding time estimation unit 12 and generates a one-time learning program that ends in accordance with that learning time by combining learning elements from the learning element storage unit 14.
  • This learning program contains a plurality of learning elements whose total learning time is shorter than the boarding time, and is also referred to as a learning target set or a learning target course.
  • The notification unit 16 notifies the user in the vehicle of the boarding time estimated by the boarding time estimation unit 12 and the learning time of the learning program generated by the learning program generation unit 15.
  • The execution unit 17 is connected to an operation switch 21 that the user can operate. When a start request operation is performed with the operation switch 21, the execution unit 17 executes the learning program generated by the learning program generation unit 15 and outputs its audio from the speaker 22.
  • The learning history storage unit 18 stores the learning program executed by the execution unit 17 (that is, the executed learning elements).
  • The load information acquisition unit 19 acquires information for estimating the driving load of the vehicle's driver (hereinafter, load information).
  • The driving load estimation unit 20 estimates the driver's driving load based on the acquired load information.
  • For example, the movement history in the movement history storage unit 13 is acquired as load information; if the current trip is one the driver is used to, such as commuting, the driving load is estimated to be lower than a predetermined load, whereas if the trip does not appear in the history and the driver is unfamiliar with it, the driving load is estimated to be higher than the predetermined load.
  • Map information of the travel route from the departure place to the destination may also be acquired as load information; in sections that require attention, such as sections with heavy traffic or a series of curves, the driving load is estimated to be higher than the predetermined load.
  • When the learning user is the driver, the learning program generation unit 15 generates learning programs of different difficulty according to the driving load estimated by the driving load estimation unit 20.
  • The driving load estimation unit 20 also estimates the driving load in real time while the vehicle is moving.
  • For example, the driving load is estimated to be higher than the predetermined load when information accompanied by a warning, such as the approach of an obstacle, is acquired as load information from a camera or sensor that monitors the vehicle's surroundings.
  • Likewise, the driving load is estimated to be higher than the predetermined load when information indicating sudden braking or sudden steering is acquired from an operation unit such as the accelerator, brake, or steering wheel.
  • Further, based on the amount of eye movement of the driver obtained from a camera that monitors the driver, the driving load may be estimated to be higher than the predetermined load when the amount of eye movement decreases, on the assumption that the driver has little spare attention.
  • When the learning user is the driver, the execution unit 17 interrupts execution of the learning program if the driver's driving load is estimated to be higher than the predetermined load while the vehicle is moving.
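  • A minimal sketch of how these two kinds of load estimation (a pre-trip estimate from route familiarity and map information, and a real-time estimate from warnings, abrupt operations, and eye movement) might be expressed follows. The two-level high/low output follows the embodiment; the specific signal names and the gaze threshold are assumptions.

```python
class DrivingLoadEstimator:
    """Hypothetical sketch of the driving load estimation unit (20).
    Returns "high" or "low" relative to the predetermined load, as in the embodiment."""

    def estimate_before_trip(self, route_is_familiar: bool, route_needs_attention: bool) -> str:
        # Familiar trips (e.g. commuting) -> low load; unfamiliar or demanding routes -> high load.
        if not route_is_familiar or route_needs_attention:
            return "high"
        return "low"

    def estimate_realtime(self, obstacle_warning: bool, abrupt_operation: bool,
                          gaze_movement: float, gaze_threshold: float = 0.3) -> str:
        # A warning from surroundings-monitoring sensors, abrupt braking or steering,
        # or reduced eye movement is treated as a sign of high driving load.
        if obstacle_warning or abrupt_operation or gaze_movement < gaze_threshold:
            return "high"
        return "low"
```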
  • FIG. 2 shows a flowchart of the speech learning control process executed by the speech learning system 10 of this embodiment.
  • This speech learning control process (S100) is started when the user activates the speech learning system 10. It may also be started in synchronization with the starting of the vehicle's engine.
  • When the speech learning control process (S100) is started, it is first determined whether a departure place and destination have been acquired as the vehicle's movement schedule (S101).
  • If the destination and the departure place (for example, the current location) have been acquired because a destination was set in the navigation system (S101: YES), the time required to travel from the departure place to the destination is estimated as the boarding time (S102).
  • If the movement schedule has not been acquired (S101: NO), the vehicle's movement history is consulted to determine whether the vehicle is waiting at a specific place (S103).
  • By storing the vehicle's movement history together with the date, day of the week, and time, it is possible to predict the driver's habitual behavior. For example, if it is customary for a mother who drives the vehicle to take a child to a cram school and wait in the cram school parking lot until the lesson ends, the cram school parking lot is registered as a specific place.
  • If it is determined that the vehicle is waiting in the cram school parking lot (S103: YES), the waiting time there is estimated as the boarding time based on the behavior prediction from the movement history (S104).
  • Once the boarding time has been estimated, a learning time shorter than the estimated boarding time is set (S105).
  • In the present embodiment, the learning time is set shorter than the estimated boarding time by a predetermined margin time (for example, 10 minutes); for example, when the boarding time is estimated to be 50 minutes, the 10-minute margin is subtracted and the learning time is set to 40 minutes. The reason for providing this margin time is described later.
  • When the learning time has been set, a process for generating a learning program in accordance with the learning time (hereinafter, the learning program generation process) is started (S106).
  • As described above, a large number of words and short phrases are stored in advance as learning elements constituting the language-learning content, and a learning program is generated by combining them.
  • At that time, the learning time can be adjusted by the number of learning elements selected, their individual playback times, the number of playback repeats, and so on, as sketched below.
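  • One way this adjustment could be realized is sketched below: learning elements are packed greedily until the learning time is filled, given a fixed number of repeats. The greedy strategy and all attribute names are assumptions for illustration, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class LearningElement:
    element_id: str
    playback_seconds: int   # time needed to play the word or phrase once
    executed_before: bool   # taken from the learning history

def generate_program(elements: List[LearningElement],
                     learning_time_s: int, repeats: int = 2) -> List[LearningElement]:
    """Greedily pack elements (each played `repeats` times) into the learning time."""
    program, used = [], 0
    for el in elements:
        cost = el.playback_seconds * repeats
        if used + cost <= learning_time_s:
            program.append(el)
            used += cost
    return program
```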
  • FIG. 3 shows a flowchart of the learning program generation process of this embodiment.
  • When the learning program generation process (S106) is started, it is first determined whether the learning will be performed while the vehicle is moving (S121). If the movement schedule was acquired in S101 and the learning is judged to take place during the trip (S121: YES), load information for the current trip is acquired (S122), and it is determined whether the driving load estimated from that information is higher than the predetermined load (S123). If, for example, the movement history acquired as load information shows that the trip is one the driver is unfamiliar with and the driving load is therefore estimated to be high (S123: YES), a learning program composed mainly of already-executed learning elements (here, words and phrases) is generated based on the stored learning history (S124).
  • In a situation where the driving load is high, the driver's attention is directed to driving and the capacity to absorb the learning tends to drop. Composing the learning program mainly of already-executed learning elements rather than new ones lowers the difficulty of the learning, so the driver can continue learning in a review-like manner while keeping attention on driving.
  • When the driving load is estimated from map information of the travel route acquired as load information, the portion of the program played back in sections where the driving load is estimated to be high may be composed mainly of already-executed learning elements.
  • The way of lowering the learning difficulty is not limited to using already-executed learning elements as the main material; the number of playback repeats may be increased instead.
  • Conversely, the portion played back in sections where the driving load is estimated to be low may be composed mainly of unexecuted learning elements.
  • When the driving load is estimated to be low (S123: NO), or when the learning is performed not while moving but while waiting at a specific place (S121: NO), a learning program composed mainly of unexecuted learning elements is generated (S125). After a learning program has been generated according to the learning environment in this way, a review program is generated (S126).
  • As described above, a learning time shorter than the estimated boarding time by a predetermined margin time (for example, 10 minutes) is set, so the margin time remains after the learning program matching the learning time has finished. This spare time can be used to execute a review program that goes over the learning just completed.
  • The review program is generated by combining learning elements included in the learning program generated in S124 or S125 so that it ends within the spare time, as in the sketch below.
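  • Under the same illustrative assumptions as the earlier generate_program sketch, a review program that reuses only elements of the generated learning program and fits the margin time could be derived as follows.

```python
def generate_review_program(learning_program, margin_time_s: int, repeats: int = 1):
    """Combine elements already in the learning program so that total playback
    fits within the spare (margin) time, as in step S126."""
    review, used = [], 0
    for el in learning_program:               # el is a LearningElement from the sketch above
        cost = el.playback_seconds * repeats
        if used + cost <= margin_time_s:
            review.append(el)
            used += cost
    return review
```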
  • Returning to the speech learning control process of FIG. 2, when the process returns from the learning program generation process (S106), the boarding time estimated in S102 or S104 and the learning time set in S105 are notified to the user in the vehicle (S107).
  • This notification may be given by voice or by display on a display unit (not shown).
  • Then, a process for executing the learning program (hereinafter, the learning program execution process) is started (S109).
  • In the present embodiment, the learning program execution process is started in response to a start request operation by the user.
  • Alternatively, the learning program execution process may be started automatically. This eliminates the need for the user to make a decision at the start of learning and can encourage a user who is confined to the vehicle to learn while on the move.
  • FIG. 4 shows a flowchart of the learning program execution process of the present embodiment.
  • When the learning program execution process (S109) is started, the learning program generated in S124 or S125 of FIG. 3 is started (S131).
  • Audio is then output from the speaker 22 in accordance with the learning program.
  • Next, it is determined whether the vehicle is moving (S132). If the vehicle is moving (S132: YES), real-time load information is acquired (S133) and it is determined whether the driving load estimated from that information is high (S134). For example, when information accompanied by a warning, such as the approach of an obstacle, is acquired and the driving load is estimated to be high (S134: YES), execution of the learning program is interrupted (S135). In a situation where the driving load is temporarily high, the driver is concentrating on driving and has no capacity for learning, so priority is given to safety by temporarily stopping the learning.
  • The contents of the learning program can also be adjusted according to the interruption time, for example by reducing the number of learning elements included in the program or the number of playback repeats.
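  • Roughly, the execution flow of FIG. 4 could look like the sketch below: start playback, and while the vehicle is moving, pause whenever the real-time driving load is estimated to be high, then trim the remaining content according to the interruption time and resume. The `player`, `sensors`, and `load_estimator` objects and all of their methods are assumed interfaces, not part of the patent.

```python
import time

def run_learning_program(program, player, load_estimator, sensors, poll_s: float = 1.0):
    """Hypothetical sketch of the learning program execution process (S109 / FIG. 4)."""

    def current_load() -> str:
        # S133 to S134: estimate the real-time driving load from assumed sensor signals
        return load_estimator.estimate_realtime(
            obstacle_warning=sensors.obstacle_warning(),
            abrupt_operation=sensors.abrupt_operation(),
            gaze_movement=sensors.gaze_movement())

    player.start(program)                          # S131: start audio playback
    while not player.finished():
        if sensors.vehicle_moving() and current_load() == "high":   # S132, S134
            player.pause()                         # S135: interrupt while the load is high
            interrupted_s = 0.0
            while current_load() == "high":
                time.sleep(poll_s)
                interrupted_s += poll_s
            # Shorten the remaining content according to the interruption time
            # (e.g. fewer elements or fewer repeats), then resume playback.
            player.trim_remaining(interrupted_s)
            player.resume()
        time.sleep(poll_s)
```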
  • FIG. 5A schematically shows an example in which the driver in the vehicle learns a language according to the speech learning control process (S100) of the present embodiment described above.
  • The horizontal axis in the figure indicates the flow of time and proceeds in the right direction.
  • In this example, the time required for the trip by vehicle is estimated as the boarding time (T11), and a learning time (T12) shorter than the boarding time by a predetermined margin time (T13) is set.
  • A learning program matching the learning time is then generated.
  • As shown in FIG. 5B, the learning program consists of four stages, steps 1 to 4, and its learning target is to learn the chorus part of the lyrics of a Western (English-language) song.
  • The learning program is executed when the vehicle starts to move, and the driver learns in order from step 1.
  • In step 1, several unlearned words are extracted from the chorus part based on the driver's learning history, and the pronunciation and meaning of these words are taught. The driver then repeats the pronunciation of the words following the voice guidance.
  • The inside of a vehicle is a private space, isolated from its surroundings in a way that a train car or a house is not, and if the driver is alone in the vehicle, he or she can speak loudly without worrying about other people, which makes it an excellent place to practice pronunciation.
  • The driver's pronunciation of each word is also recorded and then played back, and listening to and checking one's own pronunciation enhances the effect of the pronunciation practice.
  • In step 2, short phrases containing the words learned in step 1 are pronounced and their translations are taught.
  • The driver repeatedly practices pronouncing the short phrases and listens to the recorded pronunciation to check it.
  • In step 3, the short phrases learned in step 2 are joined together into gradually longer phrases.
  • The driver repeatedly practices pronouncing the gradually lengthening phrases and listens to the recorded pronunciation to check it.
  • In step 4, the driver practices singing the entire chorus part along with the accompaniment and listens to the recording to check it.
  • Once step 4 has been completed in this way, the chorus part has been sung and practiced along with the accompaniment, and the driver can then go over this learning again to raise the level of mastery.
  • As described above, the speech learning system 10 of the present embodiment estimates the boarding time during which a user (for example, the driver) is confined to the vehicle and generates a learning program whose learning time is shorter than the boarding time.
  • Since the learning time is set according to the boarding time, no decision by the driver is required.
  • Completing one session of learning within the boarding time gives a sense of accomplishment and raises motivation for the next session, so continuous learning in the vehicle can be promoted.
  • Furthermore, because the learning time is set shorter than the boarding time by the predetermined margin time, the margin time remains after the learning program finishes, and the review program can be executed within it. Executing the review program for a user whose willingness to learn has been raised by the learning program further strengthens the sense of achievement. This in turn improves motivation for the next session, so learning in the vehicle can be continued.
  • A vehicle may be equipped with automatic driving functions such as a technology that monitors the area ahead of the vehicle with a radar or the like, drives at the set speed when there is no preceding vehicle, and otherwise maintains the distance to the preceding vehicle (so-called adaptive cruise control, ACC), and a technology that recognizes the lane from the image taken by a front camera and controls the steering so that the vehicle travels along the lane (so-called lane keep assist), among others.
  • In the following, the speech learning system 10 of a modification mounted on a vehicle having such an automatic driving function is described, with a focus on the differences from the embodiment described above.
  • The same components as in the embodiment described above are given the same reference numerals, and their description is omitted.
  • FIG. 6 shows the configuration of the speech learning system 10 of the first modification.
  • The speech learning system 10 of the first modification includes an automatically drivable section estimation unit 23 in place of the load information acquisition unit 19 and the driving load estimation unit 20 of the speech learning system 10 of the embodiment described above.
  • Like the other units, the automatically drivable section estimation unit 23 is a conceptual division of the speech learning system 10 by function and does not necessarily exist as a physically independent component.
  • It can be implemented by various devices, electronic components, integrated circuits, computers, computer programs, or combinations thereof.
  • Based on the movement schedule acquired by the movement schedule acquisition unit 11, the automatically drivable section estimation unit 23 estimates the section between the departure place and the destination that satisfies a predetermined condition for automatic driving (hereinafter, the automatically drivable section).
  • In the first modification, specific road types such as expressways and motor-vehicle-only roads are defined as the predetermined condition under which automatic driving is possible. For example, if there is a section of the planned route between the departure place and the destination where the vehicle travels on an expressway, that section is estimated as an automatically drivable section.
  • An expressway has no intersections and its curves are designed gently so that vehicles can travel at high speed, and it involves fewer speed changes and less sharp steering than ordinary roads, which makes it well suited to automatic driving.
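  • If the planned route is assumed to be available as a list of segments each tagged with a road type, estimating the automatically drivable section by road type could be sketched as follows; the segment representation and the set of qualifying road types are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RouteSegment:
    start_km: float
    end_km: float
    road_type: str   # e.g. "expressway", "ordinary"

# Assumed predetermined condition: specific road types on which automatic driving is allowed
AUTOMATED_ROAD_TYPES = {"expressway", "motor_vehicle_only"}

def estimate_automatically_drivable_sections(route: List[RouteSegment]) -> List[RouteSegment]:
    """Return the segments of the planned route that satisfy the automatic-driving condition."""
    return [seg for seg in route if seg.road_type in AUTOMATED_ROAD_TYPES]
```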
  • Like the embodiment described above, the learning program generation unit 15 of the first modification sets a learning time shorter than the boarding time. When the learning user is the driver, it then generates a learning program whose content differs between the manually driven sections and the automatically drivable section, as described next.
  • FIG. 7 shows a flowchart of a learning program generation process executed by the speech learning system 10 of the first modification.
  • When the learning program generation process (S106) of the first modification is started, it is first determined whether the learning will be performed while the vehicle is moving (S141). If the learning is not performed while moving, that is, if it is performed while waiting at a specific place (S141: NO), the user can concentrate on learning, so a learning program composed mainly of unexecuted learning elements is generated based on the stored learning history (S142).
  • If the learning is performed while the vehicle is moving, the automatically drivable section is estimated based on the movement schedule (S143).
  • As described above, in the first modification expressways are defined as the specific road type on which automatic driving is possible.
  • Therefore, if there is a section between the departure place and the destination where the vehicle travels on an expressway, that expressway section is estimated as the automatically drivable section.
  • Then, based on the stored learning history, the portion of the learning program corresponding to the manually driven sections between the departure place and the destination is composed mainly of already-executed learning elements, while the portion corresponding to the automatically drivable section is composed mainly of unexecuted learning elements (S147).
  • In the automatically drivable section, the accelerator, brakes, and steering are operated automatically, which greatly reduces the driver's burden and allows the driver to concentrate on learning. The type of learning element used is therefore switched between the manually driven sections and the automatically drivable section, and the difficulty of the learning in the automatically drivable section is made higher than in the manually driven sections.
  • In the first modification, the driver's driving load is not estimated; once the learning program is started in the subsequent learning program execution process (S109), the process simply waits until the learning program finishes, regardless of whether the vehicle is moving, and ends the learning program execution process (S109) when the whole learning program has finished.
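  • A rough sketch of this section-dependent generation, reusing the hypothetical LearningElement fields from the earlier sketch: already-executed elements are preferred for the manually driven portion and unexecuted elements for the automatically drivable portion. The per-section time budgets and the packing heuristic are assumptions for illustration.

```python
def generate_sectioned_program(elements, manual_time_s: int, auto_time_s: int, repeats: int = 2):
    """Fill the manual-driving portion mainly with executed (review) elements and the
    automatically drivable portion mainly with unexecuted (new) elements."""
    executed = [e for e in elements if e.executed_before]
    unexecuted = [e for e in elements if not e.executed_before]

    def pack(pool, budget_s):
        chosen, used = [], 0
        for el in pool:
            cost = el.playback_seconds * repeats
            if used + cost <= budget_s:
                chosen.append(el)
                used += cost
        return chosen

    return {
        "manual_sections": pack(executed, manual_time_s),      # lower difficulty (review)
        "automatic_section": pack(unexecuted, auto_time_s),    # higher difficulty (new content)
    }
```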
  • FIG. 8 schematically shows an example in which the driver in the vehicle learns using the speech learning system 10 of the first modification.
  • The horizontal axis in the figure indicates the flow of time and proceeds in the right direction.
  • In this example, the time required for the trip by vehicle is estimated as the boarding time (T21), and a learning time shorter than the boarding time by a predetermined margin time (T23) is set. The learning program generated in accordance with the learning time is executed when the vehicle starts moving, and after the learning program ends, a review program generated so as to end within the spare time is executed in response to the driver's review request.
  • In this learning program, the portion corresponding to the manually driven sections (that is, the sections that are not automatically drivable) is given a low learning difficulty by being composed mainly of already-executed learning elements, while the portion corresponding to the automatically drivable section is given a higher learning difficulty by being composed mainly of unexecuted learning elements.
  • As described above, in the first modification the automatically drivable section is estimated based on the movement schedule, the type of learning element making up the learning program is switched between the manually driven sections and the automatically drivable section, and the difficulty of the learning in the automatically drivable section is made higher than in the manually driven sections.
  • As a result, the driver can learn in a review-like manner while keeping attention on driving in the manually driven sections, and can advance the learning efficiently by actively taking in new content in the automatically drivable section, where the driving burden is reduced.
  • D-2. Second modification:
  • In the first modification described above, the difficulty of the speech learning is changed between the manually driven sections and the automatically drivable section. Alternatively, the speech learning may be executed intensively in the automatically drivable section.
  • Like the first modification (see FIG. 6), the speech learning system 10 of the second modification includes an automatically drivable section estimation unit 23.
  • Based on the movement schedule acquired by the movement schedule acquisition unit 11, the automatically drivable section estimation unit 23 estimates the automatically drivable section, that is, the section between the departure place and the destination that satisfies a predetermined condition.
  • The predetermined condition under which automatic driving is possible is the same as in the first modification.
  • On the other hand, the boarding time estimation unit 12 of the second modification estimates the time required to travel through the automatically drivable section as the boarding time.
  • The learning program generation unit 15 then sets a learning time shorter than the boarding time estimated by the boarding time estimation unit 12 and generates a learning program in accordance with the learning time.
  • FIG. 9 shows a flowchart of the speech learning control process executed by the speech learning system 10 of the second modification.
  • When the speech learning control process (S200) of the second modification is started, it is first determined whether the vehicle's movement schedule has been acquired (S201). If the movement schedule has not been acquired (S201: NO), the vehicle's movement history is consulted to determine whether the vehicle is waiting at a specific place (S202).
  • If the vehicle is waiting at a specific place, the waiting time predicted from the movement history is estimated as the boarding time (S203).
  • When the movement schedule has been acquired, on the other hand, the time required to travel through the automatically drivable section (for example, the travel time on the expressway) is estimated as the boarding time (S206).
  • Next, a learning time shorter than the boarding time by a predetermined margin time is set (S207), and a one-time learning program that ends in accordance with the learning time is generated by combining elements from among the plurality of stored learning elements (S208).
  • In addition, a review program that ends within the spare time is generated by combining learning elements included in the generated learning program (S209); the sketch below puts steps S206 to S209 together.
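  • A minimal sketch of that sequence, reusing the hypothetical generate_program and generate_review_program helpers from the earlier sketches (so all names and the margin value are assumptions):

```python
def prepare_second_modification_session(auto_section_minutes: float, elements,
                                        margin_minutes: float = 10.0):
    # S206: boarding time = time needed to travel through the automatically drivable section
    boarding_s = auto_section_minutes * 60
    # S207: learning time = boarding time minus the predetermined margin time
    learning_s = max(0.0, boarding_s - margin_minutes * 60)
    # S208: one-time learning program that fits the learning time
    program = generate_program(elements, int(learning_s))
    # S209: review program that fits within the spare (margin) time
    review = generate_review_program(program, int(margin_minutes * 60))
    return program, review
```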
  • In the second modification, a learning program is generated in the same way both for learning performed while waiting at a specific place and for learning performed in the automatically drivable section, but learning programs of different difficulty may instead be generated for the two cases.
  • In response to the driver's review request, the review program is started (S215). It is then determined whether the review program has ended (S216); if it has not ended yet (S216: NO), the process waits, and when the whole review program has finished, the process ends.
  • FIG. 10 schematically shows an example in which the driver in the vehicle learns using the speech learning system 10 of the second modification.
  • The horizontal axis in the figure indicates the flow of time and proceeds in the right direction.
  • In this example, the time required to travel through the automatically drivable section is estimated as the boarding time (T31), and a learning time (T32) shorter than the boarding time by a predetermined margin time (T33) is set.
  • The learning program generated in accordance with the learning time is executed when the moving vehicle enters the automatically drivable section.
  • After the learning program ends, the review program generated so as to finish within the spare time is executed in response to the driver's review request.
  • As described above, in the second modification the automatically drivable section is estimated based on the movement schedule, and the speech learning is executed intensively in the automatically drivable section while the vehicle is moving.
  • In the automatically drivable section, the various driving operations of the vehicle are performed automatically, which greatly reduces the burden on the driver and gives the driver spare capacity, making this section particularly suitable for speech learning. By executing the speech learning intensively in the automatically drivable section, the driver can therefore advance the speech learning safely and effectively.
  • In the embodiment described above, the driving load is estimated at two levels, high and low, but it may instead be estimated at multiple levels (for example, four levels, load 1 to load 4).
  • In that case, the higher the estimated driving load, the lower the difficulty of the generated learning program (that is, the smaller the number of unexecuted learning elements it contains) may be made, as in the sketch below.
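  • For such a multi-level variant, the mapping from estimated load level to the share of new (unexecuted) elements could be as simple as the following sketch; the concrete ratios are illustrative assumptions.

```python
def unexecuted_ratio_for_load(load_level: int, max_level: int = 4) -> float:
    """Higher estimated driving load -> lower difficulty, i.e. a smaller share of
    unexecuted (new) learning elements in the generated learning program."""
    if not 1 <= load_level <= max_level:
        raise ValueError("load level out of range")
    # load 1 (lowest) -> all elements may be new; load `max_level` (highest) -> none
    return 1.0 - (load_level - 1) / (max_level - 1)
```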
  • In the embodiments described above, the time during which the user is in the vehicle (the boarding time) is estimated, but the vehicle's movement history may instead be consulted to estimate the time during which the user is away from the vehicle between trips. For example, if it is customary for a mother who has driven a child to a cram school to return home once and then drive back to the cram school later, the time she spends at home between the two trips (the free time away from the vehicle) can be estimated based on the behavior prediction from the movement history.
  • If a learning program whose learning time is shorter than this free time is generated and can be studied on the mobile terminal owned by the mother, continuous learning that makes use of the free time can be promoted. Further, the present disclosure is not limited to the speech learning system described above and may also be provided as a speech learning method.

Landscapes

  • Business, Economics & Management (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • General Business, Economics & Management (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)

Abstract

This speech learning system (10) adapted for use in a vehicle provides a user in the vehicle with learning content through speech. The speech learning system (10) comprises: a learning element storage unit (14) which stores a plurality of learning elements constituting learning content; a travel time estimation unit (12) which estimates a travel time representing the amount of time the user travels in the vehicle; a learning program generation unit (15) which combines multiple learning elements out of the plurality of learning elements to generate a one-time learning program that will end within the travel time estimated by the travel time estimation unit; and an execution unit (17) which executes the learning program. This speech learning system makes it possible to encourage continuous study in a vehicle.

Description

Speech learning system and speech learning method
Cross-reference of related applications
This application is based on Japanese Patent Application No. 2015-008175 filed on January 19, 2015 and Japanese Patent Application No. 2015-152126 filed on July 31, 2015, the contents of which are incorporated herein by reference.
The present disclosure relates to a speech learning system and a speech learning method that allow a user in a vehicle to learn by speech.
A speech learning technique is known that provides learning content to the user by voice, enabling learning even in situations where it is difficult to look at text or the like. With this technique, it is possible to learn during time spent driving a vehicle, for example.
Such audio learning content is generally provided as a program with a predetermined length for one learning session (for example, one hour). A technique has also been proposed that allows the learning user to set his or her own learning time arbitrarily (Patent Document 1).
JP 2013-109308 A
However, even when conventional speech learning technology is applied to a vehicle, the user (for example, the driver) often cannot continue learning in the vehicle, for the following reasons. First, when the learning time for one session is fixed in advance as described above, the driver may give up starting to learn on a given trip if the time required for the trip is shorter than that learning time, because the time needed for learning cannot be secured. On the other hand, when the user sets the learning time himself or herself as in Patent Document 1, a decision by the driver is required, and the driver may hesitate to start learning. Being put off in this way, learning tends to be interrupted.
The present disclosure has been made in view of the above problems of the related art, and its object is to provide a speech learning system and a speech learning method that can promote continuous learning in a vehicle.
A speech learning system according to one aspect of the present disclosure is applied to a vehicle and provides learning content by voice to a user in the vehicle. It includes a learning element storage unit that stores a plurality of learning elements constituting the learning content, a boarding time estimation unit that estimates the boarding time during which the user is in the vehicle, a learning program generation unit that generates, by combining learning elements from among the plurality of stored learning elements, a one-time learning program that ends within the boarding time estimated by the boarding time estimation unit, and an execution unit that executes the learning program.
According to such a speech learning system of the present disclosure, learning is guaranteed to finish within the boarding time, so the user can be encouraged to start learning rather than putting it off for lack of time. In addition, since the learning time is set according to the boarding time, no decision by the driver is required. Completing one session of learning within the boarding time gives a sense of accomplishment and raises motivation for the next session, so continuous learning in the vehicle can be promoted.
A speech learning method according to another aspect of the present disclosure is applied to a vehicle and provides learning content by voice to a user in the vehicle. The method includes estimating the boarding time during which the user is in the vehicle, generating a one-time learning program that ends within the boarding time by combining elements from a plurality of pre-stored learning elements constituting the learning content, and executing the learning program.
Such a speech learning method also makes it possible to promote continuous learning in the vehicle for a user in the vehicle.
The above and other objects, features, and advantages of the present disclosure will become clearer from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 is an explanatory diagram illustrating the configuration of a speech learning system according to an embodiment of the present disclosure; FIG. 2 is a flowchart illustrating a speech learning control process executed by the speech learning system according to the embodiment; FIG. 3 is a flowchart illustrating a learning program generation process according to the embodiment; FIG. 4 is a flowchart illustrating a learning program execution process according to the embodiment; FIG. 5A is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns a language according to the speech learning control process of the embodiment; FIG. 5B is a diagram showing the learning target and the learning contents of each step in the language-learning example shown in FIG. 5A; FIG. 6 is an explanatory diagram illustrating the configuration of the speech learning system according to a first modification of the present disclosure; FIG. 7 is a flowchart illustrating a learning program generation process executed by the speech learning system according to the first modification; FIG. 8 is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns using the speech learning system according to the first modification; FIG. 9 is a flowchart illustrating a speech learning control process executed by the speech learning system according to a second modification of the present disclosure; and FIG. 10 is an explanatory diagram schematically illustrating an example in which a driver in a vehicle learns using the speech learning system according to the second modification.
In the following, embodiments are described in order to clarify the contents of the present disclosure described above.
A. Device configuration:
FIG. 1 shows the configuration of the speech learning system 10 of the present embodiment. The speech learning system 10 of the present embodiment is mounted on a vehicle and provides learning content by voice to a user (for example, the driver) in the vehicle. As shown in the figure, the speech learning system 10 includes a movement schedule acquisition unit 11, a boarding time estimation unit 12, a movement history storage unit 13, a learning element storage unit 14, a learning program generation unit 15, a notification unit 16, an execution unit 17, a learning history storage unit 18, a load information acquisition unit 19, a driving load estimation unit 20, and the like.
Note that these ten "units" 11 to 20 are conceptual divisions of the speech learning system 10 by function, and each does not necessarily have to exist physically independently. They can be implemented by various devices, electronic components, integrated circuits, computers, computer programs, or combinations thereof.
The movement schedule acquisition unit 11 acquires the departure place, destination, and so on of a vehicle that is about to start moving. For example, when a destination is set in a navigation system (not shown) mounted on the vehicle, the movement schedule acquisition unit 11 acquires the set destination and the current location of the vehicle as the departure place.
Based on the movement schedule acquired by the movement schedule acquisition unit 11, the boarding time estimation unit 12 estimates the time required to travel from the departure place to the destination as the boarding time (that is, the time during which the user is confined to the vehicle).
The movement history storage unit 13 stores the vehicle's past movement history together with the date, day of the week, time, and so on.
Based on the driver's habitual behavior stored as history in the movement history storage unit 13, the boarding time estimation unit 12 can also estimate, as the boarding time, the time during which the driver waits in the vehicle after moving to a specific place.
The learning element storage unit 14 stores a plurality of learning elements constituting the speech learning content. The speech learning system 10 of the present embodiment provides content for language learning (for example, a user whose native language is Japanese learning English), and the learning element storage unit 14 stores in advance a large number of words and short phrases as learning elements constituting the language-learning content.
The learning program generation unit 15 sets a learning time shorter than the boarding time estimated by the boarding time estimation unit 12 and generates a one-time learning program that ends in accordance with that learning time by combining learning elements from the learning element storage unit 14. This learning program contains a plurality of learning elements whose total learning time is shorter than the boarding time, and is also referred to as a learning target set or a learning target course.
The notification unit 16 notifies the user in the vehicle of the boarding time estimated by the boarding time estimation unit 12 and the learning time of the learning program generated by the learning program generation unit 15.
The execution unit 17 is connected to an operation switch 21 that the user can operate. When a start request operation is performed with the operation switch 21, the execution unit 17 executes the learning program generated by the learning program generation unit 15 and outputs its audio from the speaker 22.
The learning history storage unit 18 stores the learning program executed by the execution unit 17 (that is, the executed learning elements).
The load information acquisition unit 19 acquires information for estimating the driving load of the vehicle's driver (hereinafter, load information).
The driving load estimation unit 20 estimates the driver's driving load based on the acquired load information.
For example, the movement history in the movement history storage unit 13 is acquired as load information; if the current trip is one the driver is used to, such as commuting, the driving load is estimated to be lower than a predetermined load, whereas if the trip does not appear in the history and the driver is unfamiliar with it, the driving load is estimated to be higher than the predetermined load. Map information of the travel route from the departure place to the destination may also be acquired as load information; in sections that require attention, such as sections with heavy traffic or a series of curves, the driving load is estimated to be higher than the predetermined load.
When the learning user is the driver, the learning program generation unit 15 generates learning programs of different difficulty according to the driving load estimated by the driving load estimation unit 20.
The driving load estimation unit 20 also estimates the driving load in real time while the vehicle is moving.
For example, the driving load is estimated to be higher than the predetermined load when information accompanied by a warning, such as the approach of an obstacle, is acquired as load information from a camera or sensor that monitors the vehicle's surroundings, or when information indicating sudden braking or sudden steering is acquired from an operation unit such as the accelerator, brake, or steering wheel. Further, based on the amount of eye movement of the driver obtained from a camera that monitors the driver, the driving load may be estimated to be higher than the predetermined load when the amount of eye movement decreases, on the assumption that the driver has little spare attention.
When the learning user is the driver, the execution unit 17 interrupts execution of the learning program if the driver's driving load is estimated to be higher than the predetermined load while the vehicle is moving.
B. Speech learning control process:
FIG. 2 shows a flowchart of the speech learning control process executed by the speech learning system 10 of the present embodiment.
This speech learning control process (S100) is started when the user activates the speech learning system 10. It may also be started in synchronization with the starting of the vehicle's engine. When the speech learning control process (S100) is started, it is first determined whether a departure place and destination have been acquired as the vehicle's movement schedule (S101). If the destination and the departure place (for example, the current location) have been acquired because a destination was set in the navigation system (S101: YES), the time required to travel from the departure place to the destination is estimated as the boarding time (S102).
If, on the other hand, the movement schedule has not been acquired (S101: NO), the vehicle's movement history is consulted to determine whether the vehicle is waiting at a specific place (S103). By storing the vehicle's movement history together with the date, day of the week, and time, it is possible to predict the driver's habitual behavior. For example, if it is customary for a mother who drives the vehicle to take a child to a cram school and wait in the cram school parking lot until the lesson ends, the cram school parking lot is registered as a specific place. If it is determined that the vehicle is waiting in the cram school parking lot (S103: YES), the waiting time there is estimated as the boarding time based on the behavior prediction from the movement history (S104).
If the vehicle is not waiting at a specific place (S103: NO), the process returns to the beginning of the speech learning control process (S100) and again determines whether the vehicle's movement schedule has been acquired (S101).
When, while these steps are repeated, the time required for a trip or the waiting time at a specific place has been estimated as the boarding time, a learning time shorter than the estimated boarding time is set (S105). In the present embodiment, the learning time is set shorter than the estimated boarding time by a predetermined margin time (for example, 10 minutes); for example, when the boarding time is estimated to be 50 minutes, the 10-minute margin is subtracted and the learning time is set to 40 minutes. The reason for providing this margin time is described later.
When the learning time has been set, a process for generating a learning program in accordance with the learning time (hereinafter, the learning program generation process) is started (S106). As described above, in the present embodiment a large number of words and short phrases are stored in advance as learning elements constituting the language-learning content, and a learning program is generated by combining them. At that time, the learning time can be adjusted by the number of learning elements selected, their individual playback times, the number of playback repeats, and so on.
 FIG. 3 shows a flowchart of the learning program generation process of this embodiment. As illustrated, the learning program generation process (S106) first determines whether the learning is to be performed while the vehicle is moving (S121). If the movement schedule was acquired in the process of S101 described above and the learning is therefore judged to take place during movement (S121: YES), load information relating to the upcoming trip is acquired (S122), and it is determined whether the driving load estimated from that load information is higher than a predetermined load (S123). In this embodiment, the driving load is estimated in two levels, high and low. For example, if reference to the movement history acquired as load information shows that the upcoming trip does not appear in the history and is therefore unfamiliar to the driver, so that the driving load is estimated to be higher than the predetermined load (S123: YES), a learning program composed mainly of already-executed learning elements (here, words and phrases) is generated on the basis of the stored learning history (S124).
 In situations where the driving load is high, the driver's attention is directed to driving, and attentiveness to the learning (that is, comprehension) tends to decline. By composing the learning program mainly of already-executed learning elements rather than new, unexecuted ones, the difficulty of the learning is lowered, and the driver can continue learning in a review-like manner while keeping attention on driving.
 When the driving load is estimated from map information on the travel route acquired as load information, the portion of the program played back in sections where the driving load is estimated to be high may be composed mainly of already-executed learning elements. The difficulty may also be lowered in ways other than favoring already-executed elements, for example by increasing the number of playback repeats.
 In contrast, when the upcoming trip is one the driver is used to, such as a commute, and the driving load is therefore estimated to be low (S123: NO), a learning program composed mainly of unexecuted learning elements (here, words and phrases) is generated on the basis of the stored learning history (S125).
 In situations where the driving load is low, the driver has spare capacity, and directing attention to the learning tends to improve comprehension. Composing the learning program mainly of unexecuted learning elements therefore allows new material to be actively taken in and the learning to progress.
 Based on the map information of the travel route, the portion played back in sections where the driving load is estimated to be low may likewise be composed mainly of unexecuted learning elements.
 On the other hand, if it is determined in S121 that the learning is not performed during movement, that is, it is performed while waiting at a specific place (S121: NO), there is no need to pay attention to driving and the user can concentrate on learning, so a learning program composed mainly of unexecuted learning elements is generated (S125).
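 A minimal sketch of the branching in S121 to S125 might look as follows; it reuses the hypothetical LearningElement and fill_program helpers from the earlier sketch, and the boolean flags stand in for the movement-schedule check and the load estimation.

    from datetime import timedelta

    def generate_learning_program(moving: bool,
                                  high_load: bool,
                                  elements: list,
                                  learning_time: timedelta) -> list:
        """Choose mainly reviewed or mainly new material depending on the driving context."""
        if moving and high_load:
            # Corresponds to S124: unfamiliar trip, keep the difficulty low with reviewed items.
            pool = [e for e in elements if e.learned]
        else:
            # Corresponds to S125: low load, or learning while waiting at a specific place.
            pool = [e for e in elements if not e.learned]
        return fill_program(pool, learning_time)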
 Once the learning program has been generated according to the learning environment as described above, a review program is then generated (S126). As described above, in this embodiment a learning time shorter than the estimated boarding time by a predetermined margin time (for example, 10 minutes) is set, so that the margin time remains after the learning program matched to the learning time has finished. A review program that goes back over the current session can then be executed using this margin time.
 The review program is generated by combining learning elements from among those included in the learning program generated in S124 or S125, so that it finishes within the margin time.
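 Continuing the same illustrative sketch, the review program could be produced by running the same fill over the elements of the session so that it fits the margin time; the single-repeat choice is an assumption.

    from datetime import timedelta

    def generate_review_program(learning_program: list,
                                margin: timedelta) -> list:
        """Reuse elements of the just-generated program so the review ends within the margin (cf. S126)."""
        return fill_program(learning_program, margin, repeats=1)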
 Preparing such a review program responds to the motivation of a user who wants to go back over the learning just completed under the learning program, and further heightens the sense of accomplishment.
 When the review program has been generated in this way, the learning program generation process of FIG. 3 ends and the flow returns to the speech learning control process of FIG. 2.
 In the speech learning control process, on returning from the learning program generation process (S106), the boarding time estimated in S102 or S104 and the learning time set in S105 are reported to the user in the vehicle (S107).
 This notification may be given by voice or by display on a display unit (not shown).
 Presenting the learning time alongside the boarding time in this way emphasizes that the learning will finish within the boarding time. The user therefore does not hold back from learning on the grounds that the necessary time cannot be secured, and the start of learning can be encouraged.
 Next, it is determined whether a start request operation by the user has been detected at the operation switch 21 within a predetermined time (S108). If no start request operation is detected within the predetermined time (S108: NO), it is judged that the user has no intention of starting to learn this time, and the flow returns to the beginning of the speech learning control process (S100) to repeat the series of processes described above.
 On the other hand, if a start request operation is detected (S108: YES), a process for executing the learning program (hereinafter, learning program execution process) is started (S109).
 In this embodiment, the learning program execution process is started in response to the user's start request operation, but it may instead be started automatically when the vehicle starts moving. This removes the need for the user to make a decision when starting to learn, and can encourage the user, who is confined to the vehicle while it is moving, to learn.
 FIG. 4 shows a flowchart of the learning program execution process of this embodiment. In the learning program execution process (S109), the learning program generated in S124 or S125 of FIG. 3 is first started (S131), and audio is output from the speaker 22 according to the learning program.
 Next, it is determined whether the vehicle is moving (S132). If it is moving (S132: YES), real-time load information is acquired (S133), and it is determined whether the driving load estimated from that information is high (S134). For example, when information accompanied by a warning, such as the approach of an obstacle, is acquired and the driving load is estimated to be high (S134: YES), execution of the learning program is interrupted (S135). In a situation where the driving load is temporarily high, the driver is concentrating on driving and has no capacity for learning, so the learning is suspended and priority is given to ensuring safety.
 Thereafter, real-time load information is acquired again (S136), and it is determined whether the driving load estimated from the acquired information is low (S137). If the driving load remains high (S137: NO), the flow returns to S136 and waits, continuing to estimate the real-time driving load, until the load becomes low.
 Then, in the above example, when the warning about the approaching obstacle is cleared and the driving load is estimated to be low (S137: YES), execution of the learning program is resumed from the beginning of the interrupted learning element (here, a word or phrase) (S138).
 If playback is interrupted partway through a learning element, the meaning is hard to grasp when playback simply resumes from that point, so returning to the beginning of the element, which forms a natural break, makes the learning after the interruption easier to understand.
 When execution of the learning program has been interrupted, the content of the program can be revised according to the length of the interruption, for example by reducing the number of learning elements it contains or the number of playback repeats.
 The above describes the case where the vehicle is determined to be moving in S132 (S132: YES) and the driving load is estimated in S134 to be higher than the predetermined load (S134: YES). In contrast, when the vehicle is not moving (S132: NO), there is no need to estimate the driving load, so the processing of S133 to S138 is skipped. Likewise, when the driving load is estimated to be lower than the predetermined load (S134: NO), there is no need to interrupt the learning program, so the processing of S135 to S138 is skipped, and it is determined whether the learning program has finished (S139).
 If the learning program has not yet finished (S139: NO), the flow returns to S132 and the series of processes described above is executed again.
 When, as these processes are repeated, the entire learning program has finished (S139: YES), the learning program execution process of FIG. 4 ends and the flow returns to the speech learning control process of FIG. 2.
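 The interrupt-and-resume behavior of FIG. 4 could be sketched roughly as below. The callback names (play_element, vehicle_is_moving, load_is_high) and the polling interval are hypothetical, and play_element is assumed to return False when playback was cut short by the interruption check.

    import time

    def execute_learning_program(program, play_element, vehicle_is_moving,
                                 load_is_high, poll_interval=1.0):
        """Play each element, pausing on high driving load and replaying an interrupted element."""
        for element in program:
            while True:
                # S132-S134: the player is asked to stop if the load turns high mid-element.
                interrupt = lambda: vehicle_is_moving() and load_is_high()
                if play_element(element, interrupt):
                    break  # element completed, continue with the next one (S139 loop)
                # S135-S137: learning is suspended until the estimated load is low again.
                while vehicle_is_moving() and load_is_high():
                    time.sleep(poll_interval)
                # S138: loop around and replay the element from its beginning.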
 In the speech learning control process, on returning from the learning program execution process (S109), it is determined whether a review request operation by the user has been detected at the operation switch 21 within a predetermined time (S110). As described above, in this embodiment the review program can be executed in the remaining margin time once the learning program has finished. Accordingly, if a review request operation is detected (S110: YES), the review program generated in S126 of FIG. 3 is started (S111).
 Next, it is determined whether the review program has finished (S112); if it has not yet finished (S112: NO), the process simply waits. When the entire review program has finished (S112: YES), the speech learning control process of FIG. 2 ends.
 On the other hand, if no review request operation is detected within the predetermined time (S110: NO), it is judged that the user has no intention of reviewing this time, the processing of S111 to S112 is skipped, and the speech learning control process of FIG. 2 ends.
 C. Example of speech learning execution:
 FIG. 5A schematically shows an example in which the driver in the vehicle learns a language according to the speech learning control process (S100) of this embodiment described above. The horizontal axis in the figure represents the flow of time, which proceeds to the right.
 In the example shown in FIG. 5A, the time required to travel by vehicle is taken as the boarding time (T11), a learning time (T12) shorter than the boarding time by a predetermined margin time (T13) is set, and a learning program is generated to match that learning time. As shown in FIG. 5B, the goal of this language-learning session is to memorize the chorus of a Western song, and the learning program is organized into four stages, steps 1 to 4. The learning program is executed when the vehicle starts to move, and the driver learns in order from step 1.
 First, in step 1, several unlearned words are extracted from the chorus on the basis of the driver's learning history, and their pronunciation and meaning are taught. The driver then repeatedly practices pronouncing the words, following the spoken guidance.
 The inside of a vehicle is a private space, more isolated from the surroundings than a train or a house, and if the driver is alone in the vehicle, they can speak loudly without worrying about other people, which makes it an ideal place to practice pronunciation.
 The driver's pronunciation of each word is also recorded and played back immediately afterwards. Listening to and checking one's own pronunciation enhances the learning effect for pronunciation.
 In the next step 2, short phrases containing the words learned in step 1 are taught together with their pronunciation and translation. The driver repeatedly practices pronouncing the short phrases and listens to and checks the recordings of their own pronunciation.
 Next, in step 3, the short phrases learned in step 2 are joined together into progressively longer phrases. The driver repeatedly practices pronouncing these gradually lengthening phrases and listens to and checks the recordings of their own pronunciation.
 In the final step 4, the driver practices by singing the entire chorus along with the accompaniment, and listens to and checks the recording of their own singing.
 Even after the learning program finishes in this way, the margin time remains before arrival at the destination, and for a driver whose motivation has been raised and who wants to continue a little longer, the review program is executed within that margin time.
 In this review program, as in step 4 of the learning program, the driver practices singing the chorus along with the accompaniment, and by going back over the session can raise the degree to which it has been mastered.
 As described above, the speech learning system 10 of this embodiment estimates the boarding time during which the user (for example, the driver) is confined to the vehicle and generates a learning program whose learning time is shorter than that boarding time. By guaranteeing that the learning finishes within the boarding time, the system can encourage the user to start learning. In addition, since the learning time is set according to the boarding time, no decision by the driver is required. Reliably completing one session of learning within the boarding time gives a sense of accomplishment and raises motivation for the next session, so continuous learning in the vehicle can be promoted.
 In this embodiment, moreover, setting the learning time shorter than the boarding time by a predetermined margin time secures the margin time after the learning program finishes, and the review program can be executed within it. Executing the review program for a user whose appetite for learning has been raised by the learning program further heightens the sense of accomplishment. This improves motivation for the next session and makes it possible to keep the learning going in the vehicle.
 D. Modifications:
 In recent years, vehicles have been developed that have an automatic driving function enabling the vehicle to drive itself when predetermined conditions are satisfied while it is moving.
 Known technologies for realizing automatic driving include, for example, a technology that monitors the area ahead of the vehicle with radar or the like, adjusts the speed to a set value when there is no preceding vehicle, and maintains a set inter-vehicle distance when a preceding vehicle is present (so-called adaptive cruise control, ACC), and a technology that recognizes the lane from forward images captured by a camera and controls the steering so that the vehicle travels along the lane (so-called lane keep assist). With these, the accelerator, brake, and steering operations are performed automatically, which greatly reduces the driver's burden.
 In the following, modified speech learning systems 10 mounted on vehicles having an automatic driving function are described, focusing on the points that differ from the embodiment described above. In the description of the modifications, components identical to those of the embodiment are given the same reference numerals and their description is omitted.
 D-1. First modification:
 FIG. 6 shows the configuration of the speech learning system 10 of the first modification. In place of the load information acquisition unit 19 and the driving load estimation unit 20 of the speech learning system 10 of the embodiment described above, it includes an automatically drivable section estimation unit 23. The automatically drivable section estimation unit 23 is, like the other units, a conceptual division of the speech learning system 10 by function and does not necessarily have to exist as a physically independent component; it can be implemented by various devices, electronic components, integrated circuits, computers, computer programs, or combinations thereof.
 Based on the movement schedule acquired by the movement schedule acquisition unit 11, the automatically drivable section estimation unit 23 estimates the sections between the departure point and the destination in which the predetermined conditions are satisfied and automatic driving can be carried out (hereinafter, automatically drivable sections).
 In this modification, specific road types such as expressways and motor-vehicle-only roads are defined as the predetermined condition for enabling automatic driving; for example, if the route from the planned departure point to the destination includes a section traveled on an expressway, that section is estimated to be an automatically drivable section.
 Expressways enable high-speed travel by eliminating intersections and designing curves to be gentle, and involve fewer speed changes and abrupt steering maneuvers than ordinary roads, which makes them well suited to automatic driving.
 However, if the expected weather or time of day makes conditions unsuitable for automatic driving (for example, rain or darkness at night), the section is not estimated to be automatically drivable even if it is traveled on an expressway.
 Furthermore, if a failure of the automatic driving function has been detected, it is estimated that there is no automatically drivable section.
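 A simplified sketch of these estimation rules, with a hypothetical RouteSection type standing in for whatever route representation the navigation side actually provides, could look like this:

    from dataclasses import dataclass

    @dataclass
    class RouteSection:
        road_type: str       # e.g. "expressway" or "ordinary"
        rainy: bool          # expected weather along the section
        dark: bool           # expected to be driven after dark
        duration_min: float  # expected travel time in minutes

    def estimate_auto_drive_sections(route: list[RouteSection],
                                     function_failed: bool = False) -> list[RouteSection]:
        """Keep only expressway sections whose expected conditions suit automatic driving."""
        if function_failed:
            return []  # a detected failure of the automatic driving function rules everything out
        return [s for s in route
                if s.road_type == "expressway" and not s.rainy and not s.dark]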
 When the boarding time estimation unit 12 estimates the time required to travel from the departure point to the destination as the boarding time on the basis of the movement schedule, the learning program generation unit 15 of the first modification sets a learning time shorter than that boarding time. Then, when the learning user is the driver, it generates the learning program, based on the estimation by the automatically drivable section estimation unit 23, so that the difficulty of the learning is higher in the automatically drivable sections than in the sections that are not automatically drivable (hereinafter, manual driving sections).
 FIG. 7 shows a flowchart of the learning program generation process executed by the speech learning system 10 of the first modification. In the learning program generation process (S106) of the first modification, it is first determined whether the learning is to be performed while the vehicle is moving (S141). If it is not learning during movement, that is, it is learning performed while waiting at a specific place (S141: NO), the user can concentrate on the learning, so a learning program composed mainly of unexecuted learning elements is generated on the basis of the stored learning history (S142).
 In contrast, if the learning is to be performed during movement (S141: YES), the automatically drivable sections are estimated on the basis of the movement schedule (S143). As described above, expressways and the like are defined as the specific road types on which automatic driving is possible, and when the route from the departure point to the destination includes expressway travel, the expressway section is estimated to be an automatically drivable section.
 However, when it rains or snows, fog forms, or it grows dark after sunset, the accuracy of detecting a preceding vehicle and of recognizing the lane tends to decline. In addition, when the road surface is wet and slippery from rain or the like, it may be difficult to maintain the set inter-vehicle distance to a preceding vehicle or to keep the lane. Sections judged unsuitable for automatic driving from the expected weather or time of day are therefore excluded from the automatically drivable sections.
 By taking into account not only fixed requirements such as specific road types but also variable requirements such as weather and time of day as the predetermined conditions for enabling automatic driving, highly safe automatic driving can be realized.
 Once the automatically drivable sections have been estimated in this way, it is determined whether there is an automatically drivable section between the departure point and the destination (S144). If there is none (S144: NO), the driver will drive manually all the way from the departure point to the destination and must learn while keeping attention on driving. To lower the difficulty of the learning, a learning program composed mainly of already-executed learning elements is therefore generated on the basis of the stored learning history (S145).
 On the other hand, if there is an automatically drivable section between the departure point and the destination (S144: YES), the portion of the learning program corresponding to the manual driving sections is composed mainly of already-executed learning elements (S146), and the portion corresponding to the automatically drivable sections is composed mainly of unexecuted learning elements (S147).
 In an automatically drivable section, the accelerator, brake, and steering operations are performed automatically, which greatly reduces the driver's burden and allows the driver to direct attention to the learning. The composition of the learning elements is therefore switched between the manual driving sections and the automatically drivable sections, making the learning more difficult in the automatically drivable sections than in the manual driving sections.
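 Under the same illustrative assumptions as the earlier sketches (the RouteSection, LearningElement, and fill_program helpers), the per-section composition of S144 to S147 might be sketched as follows; treating each section's travel time as its element budget is an added assumption, not something the embodiment specifies.

    from datetime import timedelta

    def generate_sectioned_program(route: list, auto_sections: list, elements: list) -> list:
        """Reviewed material for manual-driving sections, new material for automatically drivable ones."""
        reviewed = [e for e in elements if e.learned]  # lower difficulty (S146)
        new = [e for e in elements if not e.learned]   # higher difficulty (S147)
        program = []
        for section in route:
            pool = new if section in auto_sections else reviewed
            picked = fill_program(pool, timedelta(minutes=section.duration_min))
            program.extend(picked)
            for e in picked:  # avoid reusing the same element in a later section
                pool.remove(e)
        return program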
 Once the learning program has been generated in this way, according to whether the learning environment is during movement or while waiting and, for learning during movement, whether there are automatically drivable sections, a review program that finishes within the margin time is then generated by combining learning elements from those included in the generated learning program (S148).
 When the review program has been generated, the learning program generation process of FIG. 7 ends and the flow returns to the speech learning control process of FIG. 2.
 In the speech learning control process of the first modification, the driver's driving load is not estimated; in the subsequent learning program execution process (S109), once the learning program has been started the process simply waits, regardless of whether the vehicle is moving, until the program finishes, and when the entire learning program has finished, the learning program execution process (S109) ends.
 FIG. 8 schematically shows an example in which the driver in the vehicle learns using the speech learning system 10 of the first modification. The horizontal axis in the figure represents the flow of time, which proceeds to the right.
 In the illustrated example, the time required to travel by vehicle is taken as the boarding time (T21), and a learning time (T22) shorter than the boarding time by a predetermined margin time (T23) is set. The learning program generated to match that learning time is executed when the vehicle starts moving, and after the learning program finishes, a review program generated to finish within the margin time is executed in response to the driver's review request.
 The travel route also contains an automatically drivable section (Tauto) traveled on an expressway. Therefore, when the learning program is generated, the portion corresponding to the manual driving sections (that is, the sections that are not automatically drivable) is composed mainly of already-executed learning elements, setting the learning difficulty low, while the portion corresponding to the automatically drivable section is composed mainly of unexecuted learning elements, setting the learning difficulty high.
 In this way, the speech learning system 10 of the first modification estimates the automatically drivable sections from the movement schedule, changes the composition of the learning elements between the manual driving sections and the automatically drivable sections, and raises the learning difficulty in the automatically drivable sections relative to the manual driving sections. The driver can thus learn in a review-like manner while keeping attention on driving in the manual driving sections, and can actively take in new material and make efficient progress with the learning in the automatically drivable sections, where the driving burden is reduced.
 D-2. Second modification:
 In the first modification described above, the difficulty of the speech learning differed between the manual driving sections and the automatically drivable sections. Alternatively, the speech learning may be carried out intensively in the automatically drivable sections.
 Like the first modification described above (see FIG. 6), the speech learning system 10 of the second modification includes the automatically drivable section estimation unit 23.
 Based on the movement schedule acquired by the movement schedule acquisition unit 11, the automatically drivable section estimation unit 23 estimates the automatically drivable sections that satisfy the predetermined conditions between the departure point and the destination. The predetermined conditions for enabling automatic driving are the same as in the first modification.
 When the automatically drivable section estimation unit 23 estimates an automatically drivable section, the boarding time estimation unit 12 of the second modification estimates the time required to travel through that section as the boarding time.
 The learning program generation unit 15 then sets a learning time shorter than the boarding time estimated by the boarding time estimation unit 12 and generates a learning program matched to that learning time.
 FIG. 9 shows a flowchart of the speech learning control process executed by the speech learning system 10 of the second modification. When the speech learning control process (S200) of the second modification starts, it is first determined whether a movement schedule for the vehicle has been acquired (S201). If no movement schedule has been acquired (S201: NO), the vehicle's movement history is then consulted to determine whether the vehicle is waiting at a specific place (S202).
 If the vehicle is waiting at a specific place (S202: YES), the waiting time predicted from the behavior prediction based on the movement history is estimated as the boarding time (S203).
 On the other hand, if the vehicle is not waiting at a specific place (S202: NO), the flow returns to the beginning of the speech learning control process (S200) and it is again determined whether a movement schedule has been acquired (S201).
 When a movement schedule has been acquired (S201: YES), the automatically drivable sections are estimated on the basis of the movement schedule (S204). The process of estimating the automatically drivable sections is the same as in the first modification described above (see S143 in FIG. 7).
 Once the automatically drivable sections have been estimated, it is determined whether there is an automatically drivable section between the departure point and the destination (S205). In the speech learning system 10 of the second modification, the speech learning is carried out intensively in the automatically drivable sections of the route from the departure point to the destination. Therefore, if there is no automatically drivable section (S205: NO), the speech learning control process ends without any speech learning being executed.
 In contrast, if there is an automatically drivable section between the departure point and the destination (S205: YES), the time required to travel through that section (for example, the expressway travel time) is estimated as the boarding time (S206).
 When the boarding time has been estimated in this way, a learning time shorter than the boarding time by a predetermined margin time is set (S207), and a learning program for one session that finishes within that learning time is generated by combining elements from the stored plurality of learning elements (S208). In addition, a review program that finishes within the margin time is generated by combining learning elements from those included in the generated learning program (S209).
 In the second modification, the learning program is generated in the same way for learning performed while waiting at a specific place and for learning performed in an automatically drivable section during movement, but learning programs of different difficulty may be generated instead.
 Next, the estimated boarding time and the set learning time are reported to the user in the vehicle (S210), and it is determined whether the conditions for starting the speech learning are satisfied (S211). In the second modification, learning while waiting at a specific place is started when a start request operation is detected at the operation switch 21, and learning during movement is started when the vehicle enters an automatically drivable section.
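 The two start triggers of S211 could be condensed into a check like the one below; the flag names are hypothetical and would in practice be derived from the operation switch 21 and from the section estimation.

    def learning_should_start(waiting_at_place: bool,
                              start_switch_pressed: bool,
                              in_auto_drive_section: bool) -> bool:
        """S211 in the second modification: switch press while waiting, or entry into an automatically drivable section."""
        if waiting_at_place:
            return start_switch_pressed
        return in_auto_drive_section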
 If the start condition is not satisfied (S211: NO), the process simply waits; when the start condition is satisfied (S211: YES), the learning program is started (S212).
 After the learning program has been started, it is determined whether it has finished (S213); if it has not yet finished (S213: NO), the process waits. When the entire learning program has finished (S213: YES), it is determined whether a review request operation has been detected at the operation switch 21 (S214).
 If a review request operation is detected (S214: YES), the review program is started (S215). It is then determined whether the review program has finished (S216); if it has not yet finished (S216: NO), the process waits. When the entire review program has finished (S216: YES), the speech learning control process ends.
 On the other hand, if no review request operation is detected within the predetermined time (S214: NO), it is judged that the user has no intention of reviewing, the processing of S215 to S216 is skipped, and the speech learning control process ends.
 FIG. 10 schematically shows an example in which the driver in the vehicle learns using the speech learning system 10 of the second modification. The horizontal axis in the figure represents the flow of time, which proceeds to the right.
 In the illustrated example, there is an automatically drivable section (Tauto) traveled on an expressway between the planned departure point and the destination. The time required to travel through that section is therefore taken as the boarding time (T31), and a learning time (T32) shorter than this boarding time by a predetermined margin time (T33) is set.
 The learning program generated to match the learning time is then executed when the vehicle, having started moving, enters the automatically drivable section. After the learning program finishes, the review program generated to finish within the margin time is executed in response to the driver's review request.
 In this way, the speech learning system 10 of the second modification estimates the automatically drivable sections from the movement schedule and carries out the speech learning intensively in those sections while the vehicle is moving. When the vehicle enters an automatically drivable section and switches to automatic driving, the various driving operations are performed automatically, the driver's burden is greatly reduced, and the driver gains spare capacity, so the period of automatic driving is particularly well suited to speech learning. By concentrating the speech learning in the automatically drivable sections, the driver can therefore advance the speech learning safely and effectively.
 Although the embodiment and the modifications have been described above, the present disclosure is not limited to them and can be implemented in various forms without departing from its gist.
 For example, in the embodiment described above the driving load is estimated in two levels, high and low, but it may instead be estimated in multiple levels (for example, four levels, load 1 to load 4). In that case, the higher the estimated driving load, the lower the difficulty of the generated learning program (that is, the fewer unexecuted learning elements it contains) may be made.
 Also, in the embodiment described above, the time the user spends in the vehicle (the boarding time), such as the time required for a trip or the waiting time at a specific place, is estimated; however, the vehicle's movement history may instead be consulted to estimate the time the user spends away from the vehicle between trips. For example, if it is the habit of a mother who drives her child to cram school to return home once and drive back to pick the child up when the lesson ends, the time she spends at home in between (the gap time) can be estimated from the behavior prediction based on the movement history. If a learning program with a learning time shorter than this gap time is generated and can be studied on the mother's mobile terminal, continuous learning that makes use of the gap time can also be promoted. The present disclosure is not limited to the speech learning system described above and may also be provided as a speech learning method.
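 If the gap-time variant were implemented, its arithmetic would presumably mirror the boarding-time case; the sketch below is purely illustrative, and the predicted departure time for the pickup is assumed to come from the behavior prediction based on the movement history.

    from datetime import datetime, timedelta

    def estimate_gap_learning_time(returned_home_at: datetime,
                                   predicted_pickup_departure: datetime,
                                   margin: timedelta = timedelta(minutes=10)) -> timedelta:
        """Gap time between two habitual trips, shortened by the same kind of margin."""
        gap = predicted_pickup_departure - returned_home_at
        return max(gap - margin, timedelta(0))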
 Although the present disclosure has been described with reference to the embodiment, it is understood that the disclosure is not limited to that embodiment or structure. The present disclosure encompasses various modifications and variations within the scope of equivalents. In addition, various combinations and forms, as well as other combinations and forms including more, fewer, or only a single element thereof, fall within the scope and spirit of the present disclosure.

Claims (11)

  1.  A speech learning system that is applied to a vehicle and provides learning content by speech to a user in the vehicle, the system comprising:
     a learning element storage unit (14) that stores a plurality of learning elements constituting the learning content;
     a boarding time estimation unit (12) that estimates a boarding time during which the user is in the vehicle;
     a learning program generation unit (15) that generates, by combination from among the plurality of learning elements, a learning program for one session that finishes within the boarding time estimated by the boarding time estimation unit; and
     an execution unit (17) that executes the learning program.
  2.  The speech learning system according to claim 1, further comprising
     a movement schedule acquisition unit (11) that acquires a movement schedule of the vehicle,
     wherein the boarding time estimation unit estimates, as the boarding time, a time required to travel from a departure point to a destination based on the movement schedule.
  3.  The speech learning system according to claim 2, wherein
     the execution unit automatically executes the learning program when the vehicle starts moving.
  4.  The speech learning system according to claim 2 or claim 3, further comprising:
     a load information acquisition unit (19) that acquires load information for estimating a driving load of a driver of the vehicle; and
     a driving load estimation unit (20) that estimates the driving load of the driver based on the load information,
     wherein the learning program generation unit, with the driver as the user, generates the learning program so that it differs according to the driving load from the departure point to the destination.
  5.  The speech learning system according to claim 4, wherein
     the execution unit interrupts execution of the learning program when the driving load estimated while the vehicle is moving becomes higher than a predetermined load, and thereafter, when the driving load becomes lower than the predetermined load, returns to the beginning of the interrupted learning element and resumes execution of the learning program.
  6.  The speech learning system according to claim 2 or claim 3, wherein
     the vehicle has an automatic driving function that enables automatic driving of the vehicle when a predetermined condition is satisfied during movement,
     the system further comprises an automatically drivable section estimation unit (23) that estimates, based on the movement schedule, an automatically drivable section that satisfies the predetermined condition between the departure point and the destination, and
     the learning program generation unit, with the driver of the vehicle as the user, generates the learning program with a higher learning difficulty in the automatically drivable section than in a section that is not the automatically drivable section.
  7.  The speech learning system according to claim 1, wherein
     the vehicle has an automatic driving function that enables automatic driving of the vehicle when a predetermined condition is satisfied during movement,
     the system further comprises:
     a movement schedule acquisition unit (11) that acquires a movement schedule of the vehicle; and
     an automatically drivable section estimation unit (23) that estimates, based on the movement schedule, an automatically drivable section that satisfies the predetermined condition between a departure point and a destination,
     the boarding time estimation unit estimates, as the boarding time, a time required for the vehicle to travel through the automatically drivable section, and
     the execution unit executes the learning program within the automatically drivable section.
  8.  The speech learning system according to claim 1, further comprising
     a movement history storage unit (13) that stores a movement history of the user traveling in the vehicle,
     wherein the boarding time estimation unit estimates, as the boarding time, a time during which the user waits in the vehicle after the vehicle has moved to a specific place, based on the movement history.
  9.  The speech learning system according to any one of claims 1 to 8, further comprising
     a notification unit (16) that notifies the user of the boarding time estimated by the boarding time estimation unit and the learning time of the learning program generated by the learning program generation unit.
  10.  The speech learning system according to any one of claims 1 to 9, wherein
     the learning program generation unit generates the learning program with a learning time shorter, by a predetermined margin time, than the boarding time estimated by the boarding time estimation unit, and also generates, from the learning elements included in the learning program, a review program that finishes within the margin time, and
     the execution unit, after finishing the learning program, executes the review program in response to a review request from the user.
  11.  A speech learning method for providing learning content by speech to a user in a vehicle, the method comprising:
     estimating a boarding time during which the user is in the vehicle (S102, S104);
     generating, by combination from among a plurality of pre-stored learning elements constituting the learning content, a learning program for one session that finishes within the boarding time (S106); and
     executing the learning program (S109).

PCT/JP2015/006369 2015-01-19 2015-12-22 Speech learning system and speech learning method WO2016116992A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/542,810 US11164472B2 (en) 2015-01-19 2015-12-22 Audio learning system and audio learning method

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2015008175 2015-01-19
JP2015-008175 2015-01-19
JP2015-152126 2015-07-31
JP2015152126A JP6443257B2 (en) 2015-01-19 2015-07-31 Speech learning system, speech learning method

Publications (1)

Publication Number Publication Date
WO2016116992A1 true WO2016116992A1 (en) 2016-07-28

Family

ID=56416561

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/006369 WO2016116992A1 (en) 2015-01-19 2015-12-22 Speech learning system and speech learning method

Country Status (1)

Country Link
WO (1) WO2016116992A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002365061A (en) * 2001-06-11 2002-12-18 Pioneer Electronic Corp Control apparatus and method of electronic system for mobile unit, the electronic system for mobile unit, and computer program
JP2011085641A (en) * 2009-10-13 2011-04-28 Power Shift Inc Language learning support system and language learning support method
JP2015017944A (en) * 2013-07-12 2015-01-29 株式会社デンソー Automatic operation support device

Similar Documents

Publication Publication Date Title
JP6469635B2 (en) Vehicle control device
US11577742B2 (en) Methods and systems for increasing autonomous vehicle safety and flexibility using voice interaction
JP6508072B2 (en) Notification control apparatus and notification control method
US9747898B2 (en) Interpretation of ambiguous vehicle instructions
JP5115354B2 (en) Driving assistance device
WO2017006651A1 (en) Automatic driving control device
JP2018105692A (en) Automatic driving system
US20170221480A1 (en) Speech recognition systems and methods for automated driving
JP7085859B2 (en) Vehicle control unit
JP6443257B2 (en) Speech learning system, speech learning method
JP6604577B2 (en) Driving support method, driving support apparatus, driving support system, automatic driving control apparatus, vehicle and program using the same
JP2018041328A (en) Information presentation device for vehicle
JP7211707B2 (en) Agent cooperation method
JP2020052658A (en) Automatic driving system
US11462103B2 (en) Driver-assistance device, driver-assistance system, and driver-assistance program
JP2019148850A (en) Vehicle controller
WO2016116992A1 (en) Speech learning system and speech learning method
JP7331875B2 (en) Presentation controller and presentation control program
JPH11126089A (en) Voice interaction device
JP2021160708A (en) Presentation control device, presentation control program, automatic travel control system and automatic travel control program
JP7044295B2 (en) Automatic operation control device, automatic operation control method, and program
WO2021199964A1 (en) Presentation control device, presentation control program, automated driving control system, and automated driving control program
WO2023026718A1 (en) Presentation control device, presentation control program, autonomous driving control device, and autonomous driving control program
JP7394904B2 (en) Vehicle control device, vehicle control method, and program
JP7327426B2 (en) VEHICLE DISPLAY DEVICE AND DISPLAY METHOD

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15878688

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15542810

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15878688

Country of ref document: EP

Kind code of ref document: A1