CN111724798A - Vehicle-mounted device control system, vehicle-mounted device control apparatus, vehicle-mounted device control method, and storage medium


Info

Publication number: CN111724798A
Authority: CN (China)
Prior art keywords: vehicle, vehicle device, unit, occupant, device control
Legal status: Granted
Application number: CN202010189106.6A
Other languages: Chinese (zh)
Other versions: CN111724798B (en)
Inventors: 荒川桂辅, 尾中润一郎
Current Assignee: Honda Motor Co Ltd
Original Assignee: Honda Motor Co Ltd
Application filed by Honda Motor Co Ltd
Publication of CN111724798A
Application granted
Publication of CN111724798B
Legal status: Active

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00: Speaker identification or verification
    • G10L 17/22: Interactive procedures; Man-machine interfaces
    • G10L 17/24: Interactive procedures; Man-machine interfaces; the user being prompted to utter a password or a predefined phrase
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 16/00: Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R 16/02: Such circuits for electric constitutive elements
    • B60R 16/037: Such circuits for occupant comfort, e.g. for automatic adjustment of appliances according to personal settings, e.g. seats, mirrors, steering wheel
    • B60R 16/0373: Voice control

Abstract

The invention provides a vehicle-mounted device control system, a vehicle-mounted device control apparatus, a vehicle-mounted device control method, and a storage medium. The in-vehicle device control system includes: an acquisition unit that acquires a sound including the speech content of an occupant riding in a vehicle; an in-vehicle device control unit; a voice recognition unit that recognizes the sound; a specifying unit that specifies the in-vehicle device whose action is instructed; a determination unit that determines whether or not the specified in-vehicle device belongs to a predetermined group; and a general-purpose switch. When the in-vehicle device that received the instruction belongs to the predetermined group, the in-vehicle device control unit outputs, through a speaker or a display unit, at least one of a sound asking whether the operation may be executed and an approval-prompting image, and controls the operation of the instructed in-vehicle device when an input indicating approval is received through the general-purpose switch.

Description

Vehicle-mounted device control system, vehicle-mounted device control apparatus, vehicle-mounted device control method, and storage medium
Technical Field
The invention relates to a vehicle-mounted device control system, a vehicle-mounted device control apparatus, a vehicle-mounted device control method, and a storage medium.
Background
Research on human-machine interfaces that provide information through voice dialogue with a person is steadily progressing. In this connection, there are known a technique of determining, based on a person's situation, whether the person who is the target of communication with a robot is speaking, as well as the speaking volume and tone, and a technique of recognizing speech uttered by an occupant using a dictionary in which words are registered and controlling a plurality of control target devices in the vehicle cabin according to the content of the recognized speech (see, for example, Japanese Patent No. 4976903 and Japanese Patent Application Laid-Open No. 2007-286136).
Summary of the invention
Problems to be solved by the invention
However, with the conventional technology, when a plurality of occupants are present in the vehicle cabin, it is sometimes difficult to reliably ensure that an operation of a vehicle-mounted device permitted only to a specific occupant (for example, the driver of the vehicle) is instructed by the speech of that occupant.
Disclosure of Invention
An object of an aspect of the present invention is to provide an in-vehicle device control system, an in-vehicle device control apparatus, an in-vehicle device control method, and a storage medium that can reliably restrict an instruction to operations permitted only to a specific occupant, while reducing the burden that giving such instructions places on the driver of the vehicle.
Means for solving the problems
The in-vehicle device control system, the in-vehicle device control apparatus, the in-vehicle device control method, and the storage medium according to the present invention have the following configurations.
(1): An in-vehicle device control system according to an aspect of the present invention includes: an acquisition unit that acquires a sound including the speech content of an occupant riding in a vehicle; an in-vehicle device control unit that is mounted on the vehicle and controls the operation of in-vehicle devices including a speaker and a display unit; a voice recognition unit that recognizes the sound, acquired by the acquisition unit, including the speech content of the occupant of the vehicle; a specifying unit that specifies the in-vehicle device whose action is instructed by the recognized voice of the occupant; a determination unit that determines whether or not the specified in-vehicle device belongs to a predetermined group; and a general-purpose switch. When the determination unit determines that the in-vehicle device that received the instruction belongs to the predetermined group, the in-vehicle device control unit outputs, via the speaker or the display unit, at least one of a sound asking whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image that prompts approval to execute that operation, and controls the operation of the instructed in-vehicle device when an input indicating that the occupant approves execution of the instruction is received via the general-purpose switch.
(2): in the aspect (1) described above, the in-vehicle device belonging to the predetermined group is an in-vehicle device that affects the behavior of the vehicle.
(3): in addition to the aspect (1) described above, the in-vehicle device belonging to the predetermined group is an in-vehicle device corresponding to an operation permitted only by a driver in the vehicle.
(4): in addition to the aspect (1), the general-purpose switch may be used for other purposes than a scene in which a predetermined input related to a voice instruction including an input indicating the agreement is accepted.
(5): in the aspect (4) described above, when an input indicating a start of receiving a voice is received via the general-purpose switch, the voice recognition unit starts recognizing a voice including the speech content of the occupant collected by the microphone serving as the acquisition unit.
(6): in addition to any one of the above aspects (1) to (5), the universal switch is provided on a steering wheel.
(7): in addition to any one of the above items (1) to (6), the in-vehicle device control system may further include a switch that causes the in-vehicle device control unit to control an operation of the in-vehicle device belonging to the group other than the predetermined group when it is determined that the in-vehicle device that has received the instruction is an in-vehicle device other than the in-vehicle device belonging to the predetermined group.
(8): An in-vehicle device control apparatus according to an aspect of the present invention includes: an acquisition unit that acquires a sound including the speech content of an occupant riding in a vehicle; an in-vehicle device control unit that is mounted on the vehicle and controls the operation of in-vehicle devices including a speaker and a display unit; a voice recognition unit that recognizes the sound, acquired by the acquisition unit, including the speech content of the occupant; a specifying unit that specifies the in-vehicle device whose action is instructed by the recognized voice of the occupant; a determination unit that determines whether or not the specified in-vehicle device belongs to a predetermined group; and a general-purpose switch. When the determination unit determines that the in-vehicle device that received the instruction belongs to the predetermined group, the in-vehicle device control unit outputs, via the speaker or the display unit, at least one of a sound asking whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image that prompts approval to execute that operation, and controls the operation of the instructed in-vehicle device when an input indicating that the occupant approves execution of the instruction is received via the general-purpose switch.
(9): An in-vehicle device control method according to an aspect of the present invention causes one or more computers in an in-vehicle device control system, which includes an acquisition unit that acquires a sound including the speech content of an occupant of a vehicle and a general-purpose switch, to execute the following steps: recognizing the sound including the speech content of the occupant; specifying the in-vehicle device whose action is instructed by the recognized voice of the occupant; determining whether or not the specified in-vehicle device belongs to a predetermined group; when it is determined that the in-vehicle device that received the instruction belongs to the predetermined group, outputting, via a speaker or a display unit, at least one of a sound asking whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image that prompts approval to execute that operation; and controlling the operation of the instructed in-vehicle device when an input indicating that the occupant approves execution of the instruction is received via the general-purpose switch.
(10): A storage medium according to an aspect of the present invention stores a program that is installed in one or more computers in an in-vehicle device control system, which includes an acquisition unit that acquires a sound including the speech content of an occupant of a vehicle and a general-purpose switch, and that causes the computers to execute: recognizing the sound including the speech content of the occupant; specifying the in-vehicle device whose action is instructed by the recognized voice of the occupant; determining whether or not the specified in-vehicle device belongs to a predetermined group; when it is determined that the in-vehicle device that received the instruction belongs to the predetermined group, outputting, via a speaker or a display unit, at least one of a sound asking whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image that prompts approval to execute that operation; and controlling the operation of the instructed in-vehicle device when an input indicating that the occupant approves execution of the instruction is received via the general-purpose switch.
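As an informal illustration only (not part of the claimed invention), the steps recited in the method and storage-medium aspects above can be sketched as a small control loop. Every function name, device name, and command phrase below is a hypothetical stand-in chosen for the sketch.

```python
# Illustrative sketch of the claimed control flow: recognize an instruction,
# specify the target device, and gate devices in a "predetermined group"
# behind an approval input on a general-purpose switch. All names are
# hypothetical and not taken from the patent.

RESTRICTED_GROUP = {"seat_recline", "acc"}  # hypothetical predetermined group

def identify_device(text: str):
    # Toy keyword matching standing in for the voice-recognition result.
    commands = {
        "recline my seat": ("seat_recline", "recline"),
        "turn on the air conditioner": ("air_conditioner", "start"),
    }
    return commands.get(text, (None, None))

def handle_utterance(text: str, consent_switch_pressed: bool) -> str:
    """Recognize an instruction and gate restricted devices behind consent."""
    device, action = identify_device(text)      # specify the in-vehicle device
    if device is None:
        return "no-op"
    if device in RESTRICTED_GROUP and not consent_switch_pressed:
        # Belongs to the predetermined group: ask for approval first.
        return f"ask: execute {action} on {device}?"
    # Either unrestricted, or approved via the general-purpose switch.
    return f"execute {action} on {device}"
```

A restricted command first yields a query (the "sound asking whether the operation may be executed"); the same command with the switch pressed executes directly.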
Effects of the invention
According to the aspects (1) to (10), the occupant can easily instruct the operation of an in-vehicle device while the safety of the vehicle is maintained.
Drawings
Fig. 1 is a diagram showing an example of the configuration of an agent system according to the first embodiment.
Fig. 2 is a diagram showing an example of the configuration of the agent device according to the first embodiment.
Fig. 3 is a view showing an example of the interior of the vehicle as viewed from the driver seat.
Fig. 4 is a view showing an example of the vehicle interior of the vehicle M as viewed from above.
Fig. 5 is a diagram showing an example of a consent promoting image of the reclining mechanism of the driver seat.
Fig. 6 is a diagram showing an example of the configuration of the server device according to the first embodiment.
Fig. 7 is a diagram showing an example of the content of the response information.
Fig. 8 is a diagram showing an example of a sequence in a scene in which an instruction for in-vehicle device control of an influencing in-vehicle device is received.
Fig. 9 is a flowchart showing a flow of a series of processes of the agent device according to the first embodiment.
Fig. 10 is a flowchart showing a flow of a series of processes of the server device according to the first embodiment.
Fig. 11 is a diagram showing an example of the agent device according to the second embodiment.
Fig. 12 is a flowchart showing a flow of a series of processes of the agent device according to the second embodiment.
Description of the symbols:
1 … agent system, 100, 100A … agent device, 102 … agent-side communication unit, 106A to 106E … microphone, 108A to 108E … speaker, 110A to 110C … display unit, 112 … general-purpose switch, 120, 120A … agent-side control unit, 122 … acquisition unit, 124 … voice synthesis unit, 126 … output control unit, 128 … communication control unit, 130 … specifying unit, 132 … determination unit, 134 … in-vehicle device control unit, 150, 150A … agent-side storage unit, 152 … in-vehicle device information, 200 … server device, 202 … server-side communication unit, 210 … server-side control unit, 212, 214, 214A … speech section, 216 … voice recognition unit, 222, 222A … agent data generation unit, 224 … communication control unit, 230 … server-side storage unit, 234, 234A … response information, VE … in-vehicle device, NVE … non-influencing in-vehicle device, EVE … influencing in-vehicle device, M … vehicle.
Detailed Description
Embodiments of an in-vehicle device control system, an in-vehicle device control apparatus, an in-vehicle device control method, and a storage medium according to the present invention will be described below with reference to the drawings.
< first embodiment >
[ System Structure ]
Fig. 1 is a diagram showing an example of the configuration of an agent system 1 according to the first embodiment. The agent system 1 according to the first embodiment includes, for example, an agent device 100 mounted on a vehicle (hereinafter, vehicle M) and a server device 200. The vehicle M is, for example, a two-wheeled, three-wheeled, or four-wheeled vehicle. The drive source of such a vehicle may be an internal combustion engine such as a diesel or gasoline engine, an electric motor, or a combination thereof. The electric motor operates using power generated by a generator connected to the internal combustion engine, or the discharge power of a secondary battery or a fuel cell.
The agent device 100 and the server device 200 are communicably connected via a network NW. The network NW includes a LAN (Local Area Network), a WAN (Wide Area Network), and the like. The network NW may also include networks using wireless communication, such as Wi-Fi or Bluetooth (registered trademark; hereinafter omitted). The agent system 1 may include a plurality of agent devices 100 and a plurality of server devices 200.
The agent device 100 acquires sound from the occupant of the vehicle M using the agent function and transmits the acquired sound to the server device 200. Based on data (for example, agent data) obtained from the server device, the agent device 100 conducts a dialogue with the occupant, provides information such as images and videos, and controls the in-vehicle devices VE and other equipment. The vehicle M is equipped with, for example, in-vehicle devices VE whose operation influences the behavior of the vehicle M (hereinafter referred to as influencing in-vehicle devices EVE) and in-vehicle devices VE whose operation does not influence the behavior of the vehicle M (hereinafter referred to as non-influencing in-vehicle devices NVE). The influencing in-vehicle devices EVE are devices that affect the posture of the driver (for example, the reclining mechanism and the seat position control mechanism of the driver seat), devices involved in automated driving and advanced driving support (for example, ACC (Adaptive Cruise Control) and VSA (Vehicle Stability Assist)), and the like, and are devices whose operation is permitted only to the driver. The non-influencing in-vehicle devices NVE are, for example, the air conditioner, power windows, audio system, car navigation system, and the like, and are devices whose operation is also permitted to occupants other than the driver. As another method of classifying the in-vehicle devices VE, the devices may be divided into in-vehicle devices VE whose operation is permitted only to the driver and the other in-vehicle devices VE. In-vehicle devices whose operation is permitted only to the driver include, in addition to the influencing in-vehicle devices EVE, for example, the power window on the driver's seat side.
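The two classification schemes described above (influencing versus non-influencing, and driver-only versus generally operable) can be encoded side by side. The following is a minimal sketch; the device names and flag values are hypothetical examples, not a list from the patent.

```python
# Illustrative sketch of the device classifications described above.
# Each device carries both classification axes: whether its operation
# affects vehicle behavior (EVE vs. NVE) and whether only the driver
# may operate it. All entries are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeviceInfo:
    name: str
    affects_behavior: bool  # True -> influencing device (EVE), False -> NVE
    driver_only: bool       # True -> operation permitted only to the driver

DEVICES = [
    DeviceInfo("seat_recline_driver", affects_behavior=True,  driver_only=True),
    DeviceInfo("acc",                 affects_behavior=True,  driver_only=True),
    DeviceInfo("air_conditioner",     affects_behavior=False, driver_only=False),
    DeviceInfo("power_window_driver", affects_behavior=False, driver_only=True),
]

def is_influencing(name: str) -> bool:
    """Return True if the named device is classified as an EVE."""
    return any(d.affects_behavior for d in DEVICES if d.name == name)
```

Note that the driver-side power window is modeled as driver-only without being an influencing device, matching the remark that the driver-only class is broader than the EVE class.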
The server device 200 communicates with the agent device 100 mounted on the vehicle M and acquires various data from the agent device 100. Based on the acquired data, the server device 200 generates agent data responding to a voice inquiry or the like, and provides the generated agent data to the agent device 100. The functions of the server device 200 according to the first embodiment are included in the agent function. In addition, the functions of the server device 200 enhance the agent function of the agent device 100 to one with higher accuracy.
[ Structure of Intelligent body device ]
Fig. 2 is a diagram showing an example of the configuration of the agent device 100 according to the first embodiment. The agent device 100 according to the first embodiment includes, for example, an agent-side communication unit 102, a microphone 106, a speaker 108, a display unit 110, a first general-purpose switch 112, a second general-purpose switch 113, an agent-side control unit 120, and an agent-side storage unit 150. These apparatuses and devices are connected to each other by a multiplex communication line such as a CAN (Controller Area Network) communication line, a serial communication line, a wireless communication network, or the like. The configuration of the agent device 100 shown in Fig. 2 is merely an example; a part of the configuration may be omitted, or another configuration may be added.
The agent-side communication unit 102 includes a communication interface such as a NIC (Network Interface Controller). The agent-side communication unit 102 communicates with the server device 200 and the like via the network NW.
The microphone 106 is a sound input device that receives sound in the vehicle interior and converts it into an electric signal. The microphone 106 outputs the received sound data (hereinafter referred to as voice data) to the agent-side control unit 120. For example, the microphone 106 is provided near the front of an occupant seated in the vehicle cabin, such as near a map lamp, the steering wheel, the instrument panel, or a seat. A plurality of microphones 106 may be provided in the vehicle cabin.
The speaker 108 is provided, for example, near a seat in the vehicle cabin or near the display unit 110. The speaker 108 outputs sound based on information output by the agent-side control unit 120.
The display unit 110 includes a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence) display. The display unit 110 displays an image based on information output by the agent-side control unit 120.
The first general-purpose switch 112 is a user interface such as a button. The first general-purpose switch 112 receives an operation by the occupant and outputs a signal corresponding to the received operation to the agent-side control unit 120. The first general-purpose switch 112 is provided, for example, on the steering wheel. When the first general-purpose switch 112, which has no dedicated function assigned, is to be used for some purpose, the agent device 100 determines that purpose and indicates it by a sound output from the speaker 108 and an image displayed on the display unit 110. Specifically, a sound such as "Shall the power window on the driver's seat side be opened? If you agree, please press the first general-purpose switch 112." is output from the speaker 108, indicating the switch's current use.
The first general-purpose switch 112 may also be used for purposes other than receiving an input indicating the occupant's approval. For example, the first general-purpose switch 112 may be used as a switch for accepting the start of speech. The first general-purpose switch 112 may also be used in scenes other than those in which it accepts a predetermined input related to a voice instruction, including the input indicating the occupant's approval. Such other uses include, for example, starting a call on a cellular phone paired with the audio device of the vehicle M, adjusting the volume of the audio device, activating or deactivating the audio device, and turning the interior lighting on or off. The first general-purpose switch 112 may be configured to emit light, and may light up or blink at the timing when it can receive the input indicating the occupant's approval, or when it becomes usable for another purpose, to indicate that timing to the occupant. When receiving an input, the first general-purpose switch 112 may indicate its current use to the occupant by changing its emission color according to the use.
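The idea of signaling the switch's current use through its emission color can be sketched as a simple lookup. The use names and color assignments below are hypothetical illustrations, not values from the patent.

```python
# Illustrative sketch: map each current use of the general-purpose switch
# to an indicator color, as described above. Uses and colors are hypothetical.
SWITCH_USES = {
    "await_consent": "green",  # waiting for an input indicating approval
    "speech_start":  "blue",   # press to start voice reception
    "call_start":    "amber",  # start a call on the paired cellular phone
    "idle":          "off",    # no function currently assigned
}

def indicator_color(current_use: str) -> str:
    """Return the emission color for the switch's current use."""
    return SWITCH_USES.get(current_use, "off")
```

Unknown states fall back to "off", so the switch never lights misleadingly for an unassigned use.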
The second general-purpose switch 113 is a user interface such as a button. The second general-purpose switch 113 receives an operation by the occupant and outputs a signal corresponding to the received operation to the agent-side control unit 120. When the second general-purpose switch 113, which has no dedicated function assigned, is to be used for some purpose, the agent device 100 determines that purpose and indicates it by a sound output from the speaker 108 and an image displayed on the display unit 110. Specifically, a sound such as "Shall the air conditioner be started? If you agree, please press the second general-purpose switch 113." is output from the speaker 108, indicating the switch's current use.
Fig. 3 is a view showing an example of the interior of the vehicle as viewed from the driver seat. In the vehicle interior of the illustrated example, microphones 106A to 106C, speakers 108A to 108C, and display units 110A to 110C are provided. The microphone 106A is provided, for example, in a steering wheel, and mainly receives the voice of the driver. The microphone 106B is provided, for example, in an instrument panel (dash panel or garnish) IP on the front surface of the passenger seat, and mainly receives speech of the passenger in the passenger seat. The microphone 106C is provided near the center of the instrument panel (between the driver seat and the passenger seat), for example.
Speaker 108A is provided, for example, at the lower part of the door on the driver's seat side, speaker 108B is provided, for example, at the lower part of the door on the passenger seat side, and speaker 108C is provided, for example, in the vicinity of display 110C, that is, in the vicinity of the center of instrument panel IP.
The Display unit 110A is, for example, a HUD (Head-Up Display) device that displays a virtual image in front of a line of sight when the driver visually recognizes the outside of the vehicle. The HUD device is a device that allows an occupant to visually recognize a virtual image by projecting light to a windshield glass of the vehicle M or a transparent member having light permeability called a combiner, for example. The occupant is mainly the driver, but may be an occupant other than the driver.
The display unit 110B is provided on the instrument panel IP near the front of the driver seat (the seat closest to the steering wheel), and is provided at a position where the occupant can visually confirm from the gap of the steering wheel or visually confirm beyond the steering wheel. The display unit 110B is, for example, an LCD, an organic EL display device, or the like. The display unit 110B displays images of, for example, the speed of the vehicle M, the engine speed, the remaining fuel amount, the radiator water temperature, the travel distance, and other information.
The display unit 110C is provided near the center of the instrument panel IP. The display unit 110C is, for example, an LCD or an organic EL display device, as in the display unit 110B. The display unit 110C displays contents such as television programs and movies.
The first general switch 112 is provided, for example, at a position of the steering wheel that does not interfere with the driving operation (e.g., a position other than the outer periphery of the steering wheel).
In the vehicle M, a microphone and a speaker may be provided near the rear seat. Fig. 4 is a view showing an example of the vehicle interior of the vehicle M as viewed from above. In the vehicle interior, microphones 106D and 106E and speakers 108D and 108E may be provided in addition to the microphones and speakers illustrated in fig. 3.
The microphone 106D is provided, for example, in the vicinity of the rear seat ST3 provided rearward of the passenger seat ST2, and mainly receives the speech of an occupant seated in the rear seat ST3. The microphone 106E is provided, for example, in the vicinity of the rear seat ST4 provided rearward of the driver seat ST1, and mainly receives the speech of an occupant seated in the rear seat ST4.
The speaker 108D is provided, for example, below the door on the rear seat ST3 side, and the speaker 108E is provided, for example, below the door on the rear seat ST4 side.
The second general switch 113 is provided in the vicinity of the microphones 106A to 106D, for example.
The vehicle M illustrated in Fig. 1 is a vehicle provided with a steering wheel that can be operated by the driver, as illustrated in Fig. 3 or 4, but is not limited thereto. For example, the vehicle M may be a vehicle without a roof, that is, without a cabin (or without an explicit distinction of one). In the examples of Fig. 3 and 4, the driver seat, in which the driver who performs driving operations on the vehicle M sits, and the passenger seat and rear seats, in which other occupants who do not perform driving operations sit, are located in one cabin, but the present invention is not limited thereto. For example, the vehicle M may be a saddle-ride type motorcycle having a steering handlebar in place of a steering wheel. Furthermore, although the vehicle M has been described as a vehicle having a steering wheel, the present invention is not limited thereto; the vehicle M may be an autonomous vehicle not provided with a driving operation device such as a steering wheel. An autonomous vehicle is, for example, a vehicle that performs driving control by controlling one or both of the steering and the acceleration/deceleration of the vehicle without depending on an operation by an occupant.
Returning to the description of Fig. 2, the agent-side control unit 120 includes, for example, an acquisition unit 122, a voice synthesis unit 124, an output control unit 126, a communication control unit 128, a specifying unit 130, a determination unit 132, and an in-vehicle device control unit 134. These components are realized by a processor such as a CPU (Central Processing Unit) or a GPU (Graphics Processing Unit) executing a program (software). Some or all of these components may be realized by hardware (circuitry) such as an LSI (Large Scale Integration), an ASIC (Application Specific Integrated Circuit), or an FPGA (Field-Programmable Gate Array), or by cooperation between software and hardware. The program may be stored in advance in the agent-side storage unit 150 (a storage device including a non-transitory storage medium), or may be stored in a removable storage medium (a non-transitory storage medium) such as a DVD or CD-ROM and installed in the agent-side storage unit 150 when the storage medium is mounted in a drive device.
The agent-side storage unit 150 is implemented by an HDD, a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a ROM (Read-Only Memory), a RAM (Random Access Memory), or the like. The agent-side storage unit 150 stores, for example, the program referred to by the processor and in-vehicle device information 152. The in-vehicle device information 152 is information indicating (a list of) the in-vehicle devices VE mounted on the vehicle M, including, for each device, whether it is an influencing in-vehicle device EVE or a non-influencing in-vehicle device NVE.
The acquisition unit 122 acquires audio data from the microphone 106, and also acquires other information.
When the data received by the agent-side communication unit 102 from the server device 200 (agent data, described later) includes voice control content, the voice synthesis unit 124 generates an artificial synthesized voice (hereinafter referred to as agent voice) based on the voice data designated by the voice control.
When the voice synthesis unit 124 generates the agent voice, the output control unit 126 causes the speaker 108 to output it. When the agent data includes image control content, the output control unit 126 causes the display unit 110 to display the image data designated by the image control. The output control unit 126 may also display an image of the recognition result of the audio data (text data such as a sentence) on the display unit 110.
The communication control unit 128 transmits the audio data acquired by the acquisition unit 122 to the server device 200 via the agent-side communication unit 102.
When the agent data includes information indicating in-vehicle device control, the specifying unit 130 specifies the in-vehicle device VE that is the target of the in-vehicle device control, based on the in-vehicle device information 152. The specifying unit 130 searches the in-vehicle device information 152 using, for example, the name of the in-vehicle device VE included in the meaning information as a search keyword, and thereby specifies the in-vehicle device VE.
The determination unit 132 determines whether or not the in-vehicle device VE specified by the specifying unit 130 is an affecting in-vehicle device EVE, based on the in-vehicle device information 152.
When the determination unit 132 determines that the in-vehicle device VE instructed to operate by the in-vehicle device control content is not an affecting in-vehicle device EVE (that is, it is a non-affecting in-vehicle device NVE), the in-vehicle device control unit 134 controls the operation of the non-affecting in-vehicle device NVE based on the in-vehicle device control content. When it is determined that the in-vehicle device VE is an affecting in-vehicle device EVE, the in-vehicle device control unit 134 does not immediately execute the control indicated by the in-vehicle device control content, but first determines whether or not an input indicating the occupant's approval is received via the first general-purpose switch 112. When the input indicating the occupant's approval is received via the first general-purpose switch 112, the in-vehicle device control unit 134 controls the operation of the affecting in-vehicle device EVE based on the in-vehicle device control content.
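The gating performed by the in-vehicle device control unit 134 can be sketched as a small decision function. This is a minimal illustration, not the patent's implementation; the function name, string results, and Boolean inputs are assumptions introduced only to show the branch structure (NVE runs immediately, EVE runs only after consent).

```python
# Minimal sketch of the gating by the in-vehicle device control unit 134:
# non-affecting devices (NVE) are operated immediately, while affecting
# devices (EVE) are operated only when the occupant has pressed the
# (simulated) first general-purpose switch. All names are assumptions.
def execute_control(device, affecting, switch_pressed):
    if not affecting:
        return "executed: " + device   # NVE: run immediately
    if switch_pressed:
        return "executed: " + device   # EVE: run only after consent
    return "withheld: " + device       # EVE without consent: do nothing
```

Withholding the EVE branch until a physical switch press is what suppresses misrecognized voice commands from moving, say, the driver's seat.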
The vehicle M may further include a second general-purpose switch 113, separate from the first general-purpose switch 112, provided in the center of the instrument panel IP and in the vicinity of the rear-seat microphones 106D and 106E. When the determination unit 132 determines that the in-vehicle device VE instructed to operate by the in-vehicle device control content is not an in-vehicle device belonging to the predetermined group (that is, it is a non-affecting in-vehicle device NVE, or an in-vehicle device corresponding to an operation that occupants other than the driver are permitted to approve), the in-vehicle device control unit 134 may accept the occupant's approval via the second general-purpose switch 113. In this case, when the input indicating the occupant's approval is received via the second general-purpose switch 113, the operation of the non-affecting in-vehicle device NVE (or the in-vehicle device corresponding to the operation that occupants other than the driver are permitted to approve) is controlled based on the in-vehicle device control content.
Here, when the determination unit 132 determines that the in-vehicle device VE instructed to operate by the in-vehicle device control content is an affecting in-vehicle device EVE, the voice synthesis unit 124 generates a voice that asks whether the control indicated by the in-vehicle device control content may be executed and that urges the occupant, if they agree, to operate (for example, press) the first general-purpose switch 112. The output control unit 126 outputs, through the speaker 108, the voice generated by the voice synthesis unit 124 urging the operation of the first general-purpose switch 112. Likewise, when the determination unit 132 determines that the in-vehicle device VE instructed to operate is an affecting in-vehicle device EVE, the output control unit 126 may cause the display unit 110 to display an image (hereinafter referred to as a consent-facilitation image) that asks whether the instruction indicated by the in-vehicle device control content may be executed and that prompts the occupant, if they agree, to operate (for example, press) the first general-purpose switch 112.
Fig. 5 is a diagram showing an example of the consent-facilitation image IM1 for the reclining mechanism of the driver's seat (that is, an affecting in-vehicle device EVE). The consent-facilitation image IM1 includes, for example, a message MS asking whether the instruction indicated by the in-vehicle device control content (in this case, reclining) may be executed on the reclining mechanism of the driver's seat, and an image (illustrated image IM2) showing how to operate the first general-purpose switch 112 to indicate consent. The message MS is, for example, "May the driver's seat be reclined? If so, please press the general-purpose switch."
[ Structure of Server device ]
Fig. 6 is a diagram showing an example of the configuration of the server apparatus 200 according to the first embodiment. The server device 200 according to the first embodiment includes, for example, a server-side communication unit 202, a server-side control unit 210, and a server-side storage unit 230.
The server-side communication unit 202 includes a communication interface such as a NIC. Server-side communication unit 202 communicates with agent devices 100 and the like mounted on each vehicle M via network NW.
The server-side control unit 210 includes, for example, an acquisition unit 212, a speech section extraction unit 214, a voice recognition unit 216, an agent data generation unit 222, and a communication control unit 224. These components are realized by a processor such as a CPU or a GPU executing a program (software). Some or all of these components may be realized by hardware (a circuit portion; including circuitry) such as an LSI, an ASIC, or an FPGA, or may be realized by cooperation of software and hardware. The program may be stored in advance in the server-side storage unit 230 (a storage device including a non-transitory storage medium), or may be stored in a removable storage medium (a non-transitory storage medium) such as a DVD or a CD-ROM and installed in the server-side storage unit 230 by mounting the storage medium in a drive device.
The server-side storage unit 230 is implemented by an HDD, a flash memory, an EEPROM, a ROM, a RAM, or the like. The server-side storage unit 230 stores, for example, response information 234 and the like in addition to a program referred to by the processor.
Fig. 7 is a diagram showing an example of the content of the response information 234. In the response information 234, for example, the content of control to be executed by the agent-side control unit 120 is associated with meaning information. The meaning information is, for example, the meaning recognized by the voice recognition unit 216 from the entire content of the speech. The control content includes, for example, in-vehicle device control related to an instruction (control) for the operation of an in-vehicle device VE, voice control for outputting an agent voice, display control for displaying on the display unit 110, and the like. For example, in the response information 234, the in-vehicle device control "start the air conditioner", the voice control "The air conditioner has been turned on", and the display control of displaying the vehicle interior temperature and the set temperature are associated with the meaning information "start the air conditioner". When the in-vehicle device control content relates to an affecting in-vehicle device EVE, the control cannot be executed unless the occupant's consent is obtained via the first general-purpose switch 112; therefore, no voice control or display control is associated with meaning information that concerns an affecting in-vehicle device EVE.
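The association just described could be modeled as a mapping from meaning information to control content. The keys, command strings, and the decision to omit voice/display control for the affecting-device entry are illustrative assumptions based on the description above, not the actual contents of fig. 7.

```python
# Illustrative sketch of the response information 234: meaning information
# is associated with in-vehicle device control, voice control, and display
# control. For a meaning that targets an affecting in-vehicle device (EVE),
# voice and display control are deliberately omitted, since consent must be
# obtained first. All keys and command names are assumptions.
RESPONSE_INFO = {
    "start the air conditioner": {
        "device_control":  "TURN_AC_ON",
        "voice_control":   "The air conditioner has been turned on.",
        "display_control": "show_cabin_and_set_temperature",
    },
    "recline the driver's seat": {   # affecting device: consent required first
        "device_control":  "RECLINE_DRIVER_SEAT",
    },
}
```

A lookup in this table corresponds to the agent data generation unit 222 retrieving the control content for recognized meaning information.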
Returning to fig. 6, the acquisition unit 212 acquires audio data from the smart device 100 via the server-side communication unit 202.
The speech section extraction unit 214 extracts a period during which the occupant speaks (hereinafter referred to as a speech section) from the voice data acquired by the acquisition unit 212. For example, the speech section extraction unit 214 may extract the speech section by the zero-crossing method, based on the amplitude of the audio signal included in the audio data. The speech section extraction unit 214 may also extract the speech section from the audio data based on a Gaussian mixture model (GMM), or by performing template-matching processing against a database in which audio signals characteristic of speech sections are templated.
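A toy version of speech-section extraction can be sketched with a simple amplitude threshold per frame. This is only in the spirit of the methods named above (a real implementation would use the zero-crossing rate, a GMM, or template matching); the frame size and threshold are assumptions.

```python
# Toy speech-section extraction: a frame is treated as speech when its peak
# amplitude exceeds a threshold, and contiguous speech frames are merged
# into (start, end) sample ranges. Frame size and threshold are assumed.
def extract_speech_sections(samples, frame=4, threshold=0.1):
    sections, start = [], None
    for i in range(0, len(samples), frame):
        chunk = samples[i:i + frame]
        active = max(abs(s) for s in chunk) > threshold
        if active and start is None:
            start = i                      # a speech section begins
        elif not active and start is not None:
            sections.append((start, i))    # the speech section ends
            start = None
    if start is not None:
        sections.append((start, len(samples)))
    return sections
```

Only the extracted ranges would be passed on to the voice recognition unit 216, so silence before and after the utterance is discarded.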
The voice recognition unit 216 recognizes the voice data for each of the speech sections extracted by the speech section extraction unit 214, and converts the recognized voice data into text data, thereby generating text data including the content of the speech. For example, the voice recognition unit 216 separates the voice signal of the speech section into a plurality of frequency bands such as a low band and a high band, and performs a Fourier transform on each of the separated voice signals to generate a spectrogram. The voice recognition unit 216 obtains a character string from the spectrogram by inputting the generated spectrogram to a recurrent neural network. The recurrent neural network may be trained in advance using, for example, teacher data in which a known character string corresponding to a learning sound is associated, as a teacher label, with a spectrogram generated from that sound. The voice recognition unit 216 then outputs the character string data obtained from the recurrent neural network as text data.
The voice recognition unit 216 performs syntax analysis of text data in natural language, divides the text data into morphemes, and recognizes a sentence included in the text data from each morpheme.
The agent data generation unit 222 refers to the meaning information of the response information 234 based on the meaning of the speech content recognized by the voice recognition unit 216, and acquires the control content associated with the corresponding meaning information. When meanings such as "turn on the air conditioner" or "please turn on the power of the air conditioner" are recognized as the recognition result, the agent data generation unit 222 replaces these meanings with standard character information such as "start the air conditioner" or standard command information such as "TURN_AC_ON". Thus, even if the phrasing of the request varies from speech to speech, the control content that meets the request can be obtained easily.
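The normalization step can be sketched as a synonym table that collapses differently phrased requests onto one standard command. The phrase list and the command name `TURN_AC_ON` follow the example above; everything else (lowercasing, the function name) is an illustrative assumption.

```python
# Sketch of the normalization performed by the agent data generation unit
# 222: variant phrasings with the same meaning map to a single standard
# command. The phrase list is an assumption based on the example in the text.
SYNONYMS = {
    "turn on the air conditioner": "TURN_AC_ON",
    "please turn on the power of the air conditioner": "TURN_AC_ON",
    "start the air conditioner": "TURN_AC_ON",
}

def normalize(utterance):
    """Map a recognized utterance to a standard command, or None if unknown."""
    return SYNONYMS.get(utterance.lower().strip())
```

Downstream components then only need to handle the standard command, regardless of how the occupant phrased the request.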
The agent data generation unit 222 generates agent data for executing processing corresponding to the acquired control content (for example, at least one of in-vehicle device control, audio control, and display control).
The communication control unit 224 transmits the agent data generated by the agent data generation unit 222 to the vehicle M via the server-side communication unit 202. Thus, the vehicle M executes control corresponding to the agent data by the agent-side control unit 120.
[ timing chart at the time of reception of information that affects on-vehicle device control of on-vehicle device EVE ]
Fig. 8 is a diagram showing an example of a sequence diagram of a scene in which information indicating in-vehicle device control of an affecting in-vehicle device EVE is received. In fig. 8, the axes (axes AX1 to AX4 in the figure) represent the passage of time; the behavior of the occupant of the vehicle M is shown on axis AX1, the operation of the speaker 108 on axis AX2, the operation of the display unit 110 on axis AX3, and the state of the first general-purpose switch 112 on axis AX4.
First, at times t1 to t2, the occupant speaks "recline the driver's seat" (event EV1 in the figure). In response to the event EV1, the acquisition unit 122 acquires the speech received by the microphone 106 as audio data, and the communication control unit 128 transmits the audio data acquired by the acquisition unit 122 to the server device 200 via the agent-side communication unit 102. The voice recognition unit 216 recognizes the speech content of the audio data, and recognizes as its meaning information the in-vehicle device control "recline the driver's seat". The server device 200 then transmits, to the agent device 100, agent data including information indicating the in-vehicle device control of reclining the driver's seat.
The determination unit 132 receives the agent data from the server device 200, and determines whether or not the information indicating the in-vehicle device control included in the agent data relates to an affecting in-vehicle device EVE. When the determination unit 132 determines that the in-vehicle device VE indicated by the in-vehicle device control is an affecting in-vehicle device EVE (in this example, the reclining mechanism of the driver's seat), the voice synthesis unit 124 generates a voice such as "May the driver's seat be reclined? If so, please press the general-purpose switch." At time t3, the output control unit 126 outputs the voice generated by the voice synthesis unit 124 through the speaker 108 (event EV2 in the figure). Also at time t3, the output control unit 126 causes the display unit 110 to display a consent-facilitation image prompting the operation of the first general-purpose switch 112 (event EV3 in the figure). The occupant confirms one or both of the voice output in event EV2 and the consent-facilitation image displayed in event EV3, and operates the first general-purpose switch 112 to consent to reclining the driver's seat.
The first general-purpose switch 112 is set to an input-accepting state (event EV4 in the figure) for a predetermined time (for example, several tens of seconds to several minutes) from time t3, at which the inquiry whether the driver's seat may be reclined was made. When the in-vehicle device control unit 134 receives an input indicating approval via the first general-purpose switch 112 within the predetermined time, that is, at time t4, it controls the reclining mechanism of the driver's seat.
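The timed input-accepting state can be sketched as a window check on timestamps. The 60-second window, the function name, and the use of plain numeric timestamps (so the example needs no real clock) are all assumptions; the patent only specifies "several tens of seconds to several minutes".

```python
# Sketch of the first general-purpose switch's timed acceptance window: a
# press counts as consent only if it arrives within a fixed interval after
# the inquiry. The 60-second window is an assumed value.
ACCEPT_WINDOW_S = 60.0

def consent_given(inquiry_time, press_time):
    """True if the switch press falls inside the acceptance window."""
    if press_time is None:                 # occupant never pressed the switch
        return False
    return 0.0 <= press_time - inquiry_time <= ACCEPT_WINDOW_S
```

A press after the window closes (or no press at all) leaves the affecting device untouched, matching the timeout branch of the flowchart.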
[ treatment procedure ]
Next, the flow of processing of the agent system 1 according to the first embodiment will be described with reference to flowcharts. In the following, the processing of the agent device 100 and the processing of the server device 200 are described separately. The flow of processing described below may be repeatedly executed at predetermined timings. The predetermined timing is, for example, the timing at which a specific word (e.g., a wake-up word) for activating the agent device is extracted from the audio data, or the timing at which selection of a switch for activating the agent device 100, among the various switches mounted on the vehicle M, is accepted.
Fig. 9 is a flowchart showing the flow of a series of processes of the agent device 100 according to the first embodiment. First, the acquisition unit 122 of the agent-side control unit 120 determines whether or not audio data of the occupant has been collected by the microphone 106 after the wake-up word is recognized or after the switch for activating the agent device is pressed (step S100). The acquisition unit 122 waits until the audio data of the occupant is collected. Next, the communication control unit 128 transmits the audio data to the server device 200 via the agent-side communication unit 102 (step S102). Next, the communication control unit 128 receives agent data from the server device 200 (step S104).
When the received agent data includes in-vehicle device control content, the specifying unit 130 specifies the in-vehicle device VE to be controlled based on the in-vehicle device information 152 (step S106). The determination unit 132 determines whether or not the in-vehicle device VE specified by the specifying unit 130 is an affecting in-vehicle device EVE (step S108). When the determination unit 132 determines that the in-vehicle device VE to be controlled is not an affecting in-vehicle device EVE (that is, it is a non-affecting in-vehicle device NVE), the in-vehicle device control unit 134 causes the non-affecting in-vehicle device NVE (the speaker 108, the display unit 110, and the like) to execute the control (e.g., voice control, display control) indicated by the agent data (step S110).
When the determination unit 132 determines that the in-vehicle device VE is an affecting in-vehicle device EVE, the output control unit 126 requests the occupant's consent to execute the control by causing the speaker 108 to output the voice data generated by the voice synthesis unit 124 requesting the occupant's consent, or by causing the display unit 110 to display the consent-facilitation image (step S112). The in-vehicle device control unit 134 then determines whether an input indicating approval has been accepted via the first general-purpose switch 112 (step S114). When the approval is accepted, the in-vehicle device control unit 134 executes, for the affecting in-vehicle device EVE, the in-vehicle device control indicated by the agent data (step S110). If no input indicating approval is received via the first general-purpose switch 112 within the predetermined time, the in-vehicle device control unit 134 ends the process without executing the in-vehicle device control indicated by the agent data (step S116).
Fig. 10 is a flowchart showing an example of the flow of processing of the server device 200 according to the first embodiment. First, the server-side communication unit 202 acquires audio data from the agent device 100 (step S200). Next, the speech section extraction unit 214 extracts a speech section included in the audio data (step S202). Next, the voice recognition unit 216 recognizes the speech content from the voice data in the extracted speech section. Specifically, the voice recognition unit 216 converts the voice data into text data, and finally recognizes the phrases included in the text data (step S204). The agent data generation unit 222 generates agent data based on the meaning of the entire speech content (step S206). Next, the communication control unit 224 of the server-side control unit 210 transmits the agent data to the agent device 100 via the server-side communication unit 202 (step S208). This completes the processing of the flowchart.
[ Another example of Voice control and display control for urging consent ]
In the above description, when the in-vehicle device control is control of an affecting in-vehicle device EVE, the case where the voice synthesis unit 124 generates a voice asking whether the in-vehicle device control may be executed and prompting the occupant's consent has been described, but the present invention is not limited thereto. For example, the response information 234 may be information in which, as control content for an affecting in-vehicle device EVE, a correspondence is established in advance with voice control that inquires whether the in-vehicle device control may be executed and urges the occupant's consent. Similarly, the response information 234 may be information in which a correspondence is established in advance with display control that displays the consent-facilitation image as control content for an affecting in-vehicle device EVE. In this case, the voice synthesis unit 124 and the output control unit 126 execute the voice control and display control indicated by the agent data.
According to the agent system 1 of the first embodiment described above, even when the speech content of the user (occupant) relating to the control of an in-vehicle device VE is misrecognized, the operation of the in-vehicle device VE due to the misrecognition is suppressed, and the occupant can easily instruct the operation of the in-vehicle device VE while the safety of the vehicle M is maintained.
< second embodiment >
In the first embodiment described above, the case where the smart device 100 and the server device 200 mounted on the vehicle M are different devices from each other has been described, but the present invention is not limited to this. For example, the components of the server device 200 relating to the agent function may be included in the components of the agent device 100. In this case, the server device 200 may function as a virtual machine that is virtually implemented by the agent-side control unit 120 of the agent device 100. Hereinafter, the agent device 100A including the components of the server device 200 will be described as a second embodiment. In this case, the agent device 100A is an example of an "agent system". In the second embodiment, the same components as those in the first embodiment are denoted by the same reference numerals, and a detailed description thereof is omitted.
Fig. 11 is a diagram showing an example of the agent device 100A according to the second embodiment. The agent device 100A includes, for example, the agent-side communication unit 102, the microphone 106, the speaker 108, the display unit 110, the first general-purpose switch 112, the second general-purpose switch 113, an agent-side control unit 120A, and an agent-side storage unit 150A. The agent-side control unit 120A includes, for example, the acquisition unit 122, the voice synthesis unit 124, the output control unit 126, the communication control unit 128, the specifying unit 130, the determination unit 132, the in-vehicle device control unit 134, a speech section extraction unit 214A, a voice recognition unit 216A, and an agent data generation unit 222A.
The agent-side storage unit 150A stores, for example, the in-vehicle device information 152, response information 234A, and the like, in addition to the program referred to by the processor. The response information 234A may be updated with the latest information acquired from the server device 200.
[ treatment procedure ]
Fig. 12 is a flowchart showing the flow of a series of processes of the agent device 100A according to the second embodiment. The flow of processing described below may be repeatedly executed at predetermined timings, as in the first embodiment. First, the acquisition unit 122 of the agent-side control unit 120A determines whether or not audio data of the occupant has been collected by the microphone 106 (step S400). The acquisition unit 122 waits until the audio data of the occupant is collected. Next, the speech section extraction unit 214A extracts a speech section included in the audio data (step S402). Next, the voice recognition unit 216A recognizes the speech content from the voice data in the extracted speech section; specifically, it converts the voice data into text data and recognizes the phrases included in the text data (step S404). The agent data generation unit 222A generates agent data based on the meaning of the entire speech content (step S406).
When the generated agent data includes in-vehicle device control content, the specifying unit 130 specifies the in-vehicle device VE to be controlled based on the in-vehicle device information 152 (step S408). The determination unit 132 determines whether or not the in-vehicle device VE specified by the specifying unit 130 is an affecting in-vehicle device EVE (step S410). When the determination unit 132 determines that the in-vehicle device VE to be controlled is not an affecting in-vehicle device EVE (that is, it is a non-affecting in-vehicle device NVE), the in-vehicle device control unit 134 causes the non-affecting in-vehicle device NVE (the speaker 108, the display unit 110, and the like) to execute the control (for example, voice control and display control) indicated by the agent data (step S412).
When the determination unit 132 determines that the in-vehicle device VE to be controlled is an affecting in-vehicle device EVE, the output control unit 126 requests the occupant's consent to execute the control over the affecting in-vehicle device EVE by causing the speaker 108 to output the voice data generated by the voice synthesis unit 124 or by causing the display unit 110 to display the consent-facilitation image (step S414). The in-vehicle device control unit 134 determines whether an input indicating approval has been accepted via the first general-purpose switch 112 (step S416). When the approval is accepted, the in-vehicle device control unit 134 causes the affecting in-vehicle device EVE to execute the in-vehicle device control indicated by the agent data (step S412). If no input indicating approval is received via the first general-purpose switch 112 within the predetermined time, the in-vehicle device control unit 134 ends the process without executing the in-vehicle device control indicated by the agent data (step S418).
According to the agent device 100A of the second embodiment described above, in addition to the same effects as those of the first embodiment, communication with the server device 200 via the network NW is not necessary every time a voice from the occupant is acquired, so the speech content can be recognized more quickly. Further, even in a state where the vehicle M cannot communicate with the server device 200, agent data can be generated and information can be provided to the occupant.
While the present invention has been described with reference to the embodiments, the present invention is not limited to the embodiments, and various modifications and substitutions can be made without departing from the scope of the present invention.
For example, in the above-described embodiments, the case where the vehicle is a four-wheeled motor vehicle has been described as an example, but the present invention is not limited thereto. For example, the vehicle may be another vehicle such as a motorcycle or a transport truck. The vehicle may also be a rental car, a shared car, or the like. In such cases, the agent device 100 may be installed in each of a plurality of rental cars, rental bicycles, shared cars, or the like. Because the agent device 100 interacts with the occupant, the occupant can easily perform operations by voice even when seated for the first time in a vehicle on which the agent device 100 is mounted, or when unfamiliar with its operation. Further, since the agent device 100 can request, from occupants other than the driver, approval for operations that those occupants are permitted to approve, the burden on the driver can be reduced.

Claims (10)

1. An in-vehicle apparatus control system, wherein,
the vehicle-mounted device control system includes:
an acquisition unit that acquires a sound including speech content of an occupant riding in a vehicle;
an in-vehicle device control unit mounted on the vehicle and controlling an operation of an in-vehicle device including a speaker and a display unit;
a voice recognition unit that recognizes a voice including the speech content of the occupant of the vehicle acquired by the acquisition unit;
a determination section that determines the in-vehicle apparatus that indicates an action by the voice of the occupant recognized by the voice recognition section;
a determination unit that determines whether or not the specified in-vehicle device belongs to a predetermined group; and
a general-purpose switch,
wherein, when the determination unit determines that the in-vehicle device that received the instruction is an in-vehicle device belonging to the predetermined group, the in-vehicle device control unit outputs, through the speaker or the display unit, at least one of a sound inquiring whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and a consent-facilitation image prompting consent to execute the operation on the in-vehicle device belonging to the predetermined group, and controls the operation of the in-vehicle device that received the instruction when an input indicating the occupant's consent to execute the instruction is received via the general-purpose switch.
2. The in-vehicle device control system according to claim 1,
the in-vehicle device belonging to the predetermined group is an in-vehicle device that affects the behavior of the vehicle.
3. The in-vehicle device control system according to claim 1,
the in-vehicle device belonging to the predetermined group is an in-vehicle device corresponding to an operation permitted only by a driver in the vehicle.
4. The in-vehicle device control system according to claim 1,
the general-purpose switch is a switch that can also be used for purposes other than a scene in which a predetermined input related to a voice instruction, including the input indicating the consent, is accepted.
5. The in-vehicle device control system according to claim 4,
when an input indicating that reception of the voice is to be started is received via the general-purpose switch, the voice recognition unit starts recognizing the voice including the speech content of the occupant collected by the microphone serving as the acquisition unit.
6. The in-vehicle device control system according to claim 1 or 5,
the general-purpose switch is provided on a steering wheel.
7. The in-vehicle device control system according to any one of claims 1 to 6,
the in-vehicle device control system further comprises a switch that, when it is determined that the in-vehicle device that received the instruction is an in-vehicle device other than the in-vehicle devices belonging to the predetermined group, causes the in-vehicle device control unit to control the operation of the in-vehicle device not belonging to the predetermined group.
8. An in-vehicle device control apparatus, wherein,
the vehicle-mounted device control device is provided with:
an acquisition unit that acquires a sound including speech content of an occupant riding in a vehicle;
an in-vehicle device control unit mounted on the vehicle and controlling an operation of an in-vehicle device including a speaker and a display unit;
a voice recognition unit that recognizes a voice including the speech content of the occupant acquired by the acquisition unit;
a determination section that specifies the in-vehicle device whose operation is instructed by the voice of the occupant recognized by the voice recognition unit;
a determination unit that determines whether or not the specified in-vehicle device belongs to a predetermined group; and
a general-purpose switch,
when the determination unit determines that the in-vehicle device that received the instruction is an in-vehicle device belonging to the predetermined group, the in-vehicle device control unit outputs, through the speaker or the display unit, at least one of a sound inquiring whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image prompting approval to execute the operation in the in-vehicle device belonging to the predetermined group, and controls the operation of the in-vehicle device that received the instruction when an input indicating approval to execute the instruction is received via the general-purpose switch.
9. A control method for an in-vehicle apparatus, wherein,
the in-vehicle device control method causes one or more computers in an in-vehicle device control system, which includes an acquisition unit that acquires a voice including speech content of an occupant of a vehicle and a general-purpose switch, to execute the steps of:
recognizing a voice including speech content of the occupant;
determining the in-vehicle device whose operation is instructed by the recognized voice of the occupant;
determining whether the determined in-vehicle device is an in-vehicle device belonging to a predetermined group;
when it is determined that the in-vehicle device that received the instruction is an in-vehicle device belonging to the predetermined group, outputting, through a speaker or a display unit, at least one of a sound inquiring whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image prompting approval to execute the operation in the in-vehicle device belonging to the predetermined group; and
controlling the operation of the in-vehicle device that received the instruction when an input indicating that the occupant approves execution of the instruction is received via the general-purpose switch.
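The method steps of claim 9 can be illustrated with a minimal sketch. This is not part of the patent: the group members and the function names (`resolve_device`, `prompt_occupant`, `switch_approves`, `execute`) are hypothetical stand-ins for the specifying step, the speaker/display output, the general-purpose switch input, and the device control step, respectively:

```python
# Hypothetical sketch of the claimed control flow: recognize a voice
# instruction, determine the target in-vehicle device, and require
# approval via the general-purpose switch only when the device belongs
# to the predetermined group.

# Assumed example members of the predetermined (behavior-affecting) group.
PREDETERMINED_GROUP = {"parking_brake", "power_window"}

def handle_voice_instruction(recognized_text, resolve_device, prompt_occupant,
                             switch_approves, execute):
    """Return True if the instructed operation was executed."""
    device, action = resolve_device(recognized_text)      # determine the device
    if device in PREDETERMINED_GROUP:                     # group membership check
        prompt_occupant(f"Execute '{action}' on {device}?"
                        " Press the general-purpose switch to approve.")
        if not switch_approves():                         # approval via the switch
            return False                                  # not approved: do nothing
    execute(device, action)                               # control the operation
    return True
```

Devices outside the group are operated immediately, while devices inside it are operated only after the occupant's explicit switch press, matching the distinction drawn in claims 1 and 7.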
10. A storage medium, wherein,
the storage medium stores a program that is installed in one or more computers in an in-vehicle device control system, which includes an acquisition unit that acquires a voice including speech content of an occupant of a vehicle and a general-purpose switch, and that causes the computers to execute:
recognizing a voice including speech content of the occupant;
determining the in-vehicle device whose operation is instructed by the recognized voice of the occupant;
determining whether the determined in-vehicle device is an in-vehicle device belonging to a predetermined group;
when it is determined that the in-vehicle device that received the instruction is an in-vehicle device belonging to the predetermined group, outputting, through a speaker or a display unit, at least one of a sound inquiring whether the in-vehicle device belonging to the predetermined group may execute the operation corresponding to the instruction and an approval-prompting image prompting approval to execute the operation in the in-vehicle device belonging to the predetermined group; and
controlling the operation of the in-vehicle device that received the instruction when an input indicating that the occupant approves execution of the instruction is received via the general-purpose switch.
CN202010189106.6A 2019-03-19 2020-03-17 Vehicle-mounted device control system, vehicle-mounted device control apparatus, vehicle-mounted device control method, and storage medium Active CN111724798B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2019051674A JP7261626B2 (en) 2019-03-19 2019-03-19 VEHICLE EQUIPMENT CONTROL SYSTEM, VEHICLE EQUIPMENT CONTROL DEVICE, VEHICLE EQUIPMENT CONTROL METHOD, AND PROGRAM
JP2019-051674 2019-03-19

Publications (2)

Publication Number Publication Date
CN111724798A true CN111724798A (en) 2020-09-29
CN111724798B CN111724798B (en) 2024-05-07

Family

ID=72558861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010189106.6A Active CN111724798B (en) 2019-03-19 2020-03-17 Vehicle-mounted device control system, vehicle-mounted device control apparatus, vehicle-mounted device control method, and storage medium

Country Status (2)

Country Link
JP (1) JP7261626B2 (en)
CN (1) CN111724798B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114954168A (en) * 2021-08-05 2022-08-30 长城汽车股份有限公司 Method and device for ventilating and heating seat, storage medium and vehicle
WO2024053182A1 (en) * 2022-09-05 2024-03-14 日産自動車株式会社 Voice recognition method and voice recognition device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000163091A (en) * 1998-11-27 2000-06-16 Denso Corp Speech recognition device
US20020128762A1 (en) * 2000-06-29 2002-09-12 Jatco Transtechnology Ltd. Vehicle control device
JP2003345389A (en) * 2002-05-22 2003-12-03 Nissan Motor Co Ltd Voice recognition device
JP2005153671A (en) * 2003-11-25 2005-06-16 Nissan Motor Co Ltd Display operating device for vehicle
CN101133439A (en) * 2005-11-07 2008-02-27 松下电器产业株式会社 Display device and navigation device
JP2016161754A (en) * 2015-03-02 2016-09-05 クラリオン株式会社 Vehicle-mounted device
CN109298830A (en) * 2018-11-05 2019-02-01 广州小鹏汽车科技有限公司 The vehicle-mounted middle control large-size screen monitors touch control method of one kind, device and computer readable storage medium


Also Published As

Publication number Publication date
JP2020154098A (en) 2020-09-24
CN111724798B (en) 2024-05-07
JP7261626B2 (en) 2023-04-20

Similar Documents

Publication Publication Date Title
US10170111B2 (en) Adaptive infotainment system based on vehicle surrounding and driver mood and/or behavior
CN106663422B (en) Speech recognition system and speech recognition method thereof
KR101736109B1 (en) Speech recognition apparatus, vehicle having the same, and method for controlling thereof
JP7250547B2 (en) Agent system, information processing device, information processing method, and program
CN112805182A (en) Agent device, agent control method, and program
CN111724798B (en) Vehicle-mounted device control system, vehicle-mounted device control apparatus, vehicle-mounted device control method, and storage medium
JP2020061642A (en) Agent system, agent control method, and program
CN111007968A (en) Agent device, agent presentation method, and storage medium
US11325605B2 (en) Information providing device, information providing method, and storage medium
JP2020060861A (en) Agent system, agent method, and program
JP7239359B2 (en) AGENT DEVICE, CONTROL METHOD OF AGENT DEVICE, AND PROGRAM
US20220185111A1 (en) Voice control of vehicle systems
JP2020144285A (en) Agent system, information processing device, control method for mobile body mounted apparatus, and program
CN111731320B (en) Intelligent body system, intelligent body server, control method thereof and storage medium
CN112908320B (en) Agent device, agent method, and storage medium
JP7280066B2 (en) AGENT DEVICE, CONTROL METHOD OF AGENT DEVICE, AND PROGRAM
JP7252029B2 (en) SERVER DEVICE, INFORMATION PROVISION METHOD, AND PROGRAM
JP7254689B2 (en) Agent system, agent method and program
CN110843790A (en) Method, device and equipment for cooperative control of hardware in vehicle
JP2020060623A (en) Agent system, agent method, and program
WO2023153314A1 (en) In-vehicle equipment control device and in-vehicle equipment control method
CN111559317B (en) Agent device, method for controlling agent device, and storage medium
KR20230039799A (en) Vehicle and method for controlling thereof
CN114475425A (en) Driver assistance system and vehicle having the same
CN117995185A (en) Method, device and system for interaction between interior and exterior of vehicle and vehicle-mounted controller

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant