CN107303909B - Voice call-up method, device and equipment - Google Patents

Voice call-up method, device and equipment

Info

Publication number
CN107303909B
CN107303909B
Authority
CN
China
Prior art keywords
voice
user
vehicle
information
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610246576.5A
Other languages
Chinese (zh)
Other versions
CN107303909A (en)
Inventor
郭云云
蔡丽娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zebra Network Technology Co Ltd
Original Assignee
Zebra Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zebra Network Technology Co Ltd filed Critical Zebra Network Technology Co Ltd
Priority to CN201610246576.5A priority Critical patent/CN107303909B/en
Priority to PCT/CN2017/080387 priority patent/WO2017181901A1/en
Priority to TW106112807A priority patent/TW201742424A/en
Publication of CN107303909A publication Critical patent/CN107303909A/en
Application granted granted Critical
Publication of CN107303909B publication Critical patent/CN107303909B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R16/00Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
    • B60R16/02Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements
    • B60R16/023Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric constitutive elements for transmission of signals between vehicle parts or subsystems
    • B60R16/0231Circuits relating to the driving or the functioning of the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention

Abstract

The application provides a voice call-up method, device and equipment. The method comprises the following steps: acquiring a voice interaction scene; determining a corresponding first voice according to the voice interaction scene, and broadcasting the first voice to a user, the first voice being a voice prompting the user to interact with a vehicle or with equipment on the vehicle; and receiving operation information input by the user according to the first voice and executing a corresponding operation according to the operation information. Interaction between the vehicle and the user is thereby ensured, and rich voice content is provided so that the user can interact with the vehicle conveniently. The user does not need to manually trigger the vehicle's voice interaction function; instead, the voice function can be actively triggered according to the vehicle's different driving states, improving the intelligence of man-machine interaction and the applicability of the vehicle's voice triggering.

Description

Voice call-up method, device and equipment
Technical Field
The present application relates to internet technologies, and in particular, to a voice call-up method, apparatus, and device applied to a vehicle.
Background
With continued social and economic development, household vehicles have become increasingly common, and users' expectations of them keep rising. To meet these expectations, major vehicle manufacturers invest heavily in vehicle intelligence so that vehicles bring more convenience to daily life.
At present, voice call-up in a vehicle is a common form of interaction between the vehicle and a user. For example, when the user has an incoming call, the user can answer it by calling up the vehicle-mounted voice function instead of holding a mobile phone, thereby avoiding dangerous driving.
However, current voice call-up must be triggered manually by the user, targets only a single kind of object, and has low applicability, so the resulting man-machine interaction is not intelligent enough.
Disclosure of Invention
The application provides a voice call-up method, device and equipment, aiming to solve the technical problems in the prior art that voice call-up must be triggered manually by the user, targets only a single kind of object, has low applicability, and yields man-machine interaction that is not intelligent enough.
In one aspect, the present application provides a voice call-up method, comprising:
acquiring a voice interaction scene;
determining a corresponding first voice according to the voice interaction scene, and broadcasting the first voice to a user, wherein the first voice is a voice prompting the user to interact with the vehicle or with equipment on the vehicle;
and receiving operation information input by a user according to the first voice, and executing corresponding operation according to the operation information.
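The three claimed steps can be sketched as follows. This is a minimal illustration in Python; all names (`SCENE_PROMPTS`, `handle_scene`, the callback parameters) are hypothetical, not from the patent.

```python
# Minimal sketch of the claimed flow: acquire a voice interaction scene,
# determine the corresponding first voice, broadcast it, then execute the
# operation the user inputs in response. All names are illustrative.

# Hypothetical mapping from a voice interaction scene to its first voice.
SCENE_PROMPTS = {
    "low_fuel": "Fuel is low. Navigate to a nearby gas station?",
    "seatbelt_unfastened": "Please fasten your seat belt.",
    "incoming_call": "You have an incoming call. Answer or reject?",
}

def determine_first_voice(scene):
    """Determine the first voice corresponding to the acquired scene."""
    return SCENE_PROMPTS.get(scene)

def handle_scene(scene, receive_operation, broadcast, execute):
    """Broadcast the first voice, then execute the user's operation."""
    first_voice = determine_first_voice(scene)
    if first_voice is None:
        return None  # no prompt defined for this scene: do not call up voice
    broadcast(first_voice)
    operation = receive_operation()  # operation info input per the first voice
    return execute(operation)
```

In a real in-vehicle system, `broadcast` would drive text-to-speech output and `receive_operation` would wrap speech recognition or a physical control; here they are injected callbacks so the flow itself is testable.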
According to the voice call-up method, a voice interaction scene is obtained, the vehicle's voice function is automatically called up through that scene, and a first voice corresponding to the current scene is broadcast to the user, prompting the user to interact with the vehicle or with equipment on the vehicle. After receiving the operation information the user inputs according to the first voice, the vehicle's system executes the corresponding operation, ensuring interaction between the system and the user and meeting the user's interaction needs. Because voice is called up automatically whenever a voice interaction scene is acquired, rich voice content is provided, and the user can interact with the vehicle conveniently without manually triggering its voice interaction function, improving the intelligence of man-machine interaction. In addition, the voice interaction scene obtained by the vehicle-mounted system corresponds to the vehicle's driving state; since driving states are diverse, the corresponding voice interaction scenes are diverse as well, so the vehicle's voice function can be actively called up in different driving states, improving the applicability of the vehicle's voice call-up.
As an implementation manner, the receiving operation information input by the user according to the first voice and executing a corresponding operation according to the operation information specifically comprises:
receiving a user operation instruction input by the user according to the first voice;
judging, according to the user operation instruction, whether the user operation is the operation that the first voice instructed the user to perform;
if so, stopping broadcasting the first voice.
In this method, the first voice prompts the user about the operation the user needs to perform. After the user acts on that prompt, the system judges, according to the user operation instruction that was input, whether the operation is the one the first voice indicated, and stops playing the first voice once it detects that it is. The user can thus intuitively understand what the vehicle requires and promptly notice a vehicle abnormality or an improper operation, which further improves the user experience.
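The stop-on-correct-operation behaviour described above can be sketched as a small state holder. The `Prompt` class and its fields are illustrative assumptions, not the patent's structure.

```python
# Sketch of "stop broadcasting on the indicated operation": the first voice
# keeps being broadcast until the user performs the operation it indicated.

class Prompt:
    def __init__(self, text, expected_operation):
        self.text = text                              # the first voice
        self.expected_operation = expected_operation  # operation it asks for
        self.broadcasting = True

    def on_user_operation(self, operation):
        """Judge whether the input operation is the one the first voice
        indicated; if so, stop broadcasting the first voice."""
        if operation == self.expected_operation:
            self.broadcasting = False
            return True
        return False  # not the indicated operation: keep prompting
```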
As an implementation manner, the receiving operation information input by the user according to the first voice and executing a corresponding operation according to the operation information specifically comprises:
receiving a second voice input by the user according to the content prompted by the first voice, wherein the first voice is used for prompting the user to perform a selection operation on the content prompted by the first voice, and the second voice is the user's selection result;
and executing a corresponding operation according to the second voice.
In this method, the first voice instructs the user to perform a selection operation on the content it prompts, the second voice input by the user according to the first voice is obtained, and the corresponding operation is executed according to the second voice, making man-machine interaction more intelligent. In addition, the method can learn the user's potential voice-triggering needs in advance, actively call up the voice for the user, and remind the user of pending events, preventing the user from forgetting them; this convenience further improves the user experience.
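The selection flow above — first voice offers options, second voice is the user's selection, and the matching action runs — can be sketched as a simple dispatch. The option names and actions below are illustrative assumptions.

```python
# Sketch of the second-voice selection flow: the user's spoken selection
# result is matched against the options the first voice offered.

def handle_selection(second_voice, actions):
    """Execute the action matching the user's spoken selection, if any."""
    action = actions.get(second_voice.strip().lower())
    if action is None:
        return None  # unrecognised selection; a real system might re-prompt
    return action()

# Example: the first voice was "Fuel is low. Navigate to a gas station,
# or ignore?" — two hypothetical selectable actions.
actions = {
    "navigate": lambda: "routing_to_gas_station",
    "ignore": lambda: "dismissed",
}
```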
As an implementation manner, the acquiring a voice interaction scene specifically includes:
acquiring parameter information related to the running of the vehicle; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and determining a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
Optionally, the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes the vehicle's own condition information and/or vehicle warning tone information. Optionally, as an implementable manner, the vehicle warning tone information includes the type of the vehicle warning tone or the number of times the vehicle warning tone has been announced.
In this method, parameter information related to the vehicle's driving is collected, and the voice interaction scene corresponding to the vehicle's current driving state is determined from it, so that the vehicle's voice function is actively called up according to that scene to interact with the user. In other words, the diversity of the parameter information guarantees the diversity of voice interaction scenes, which enriches the ways the vehicle's voice function can be actively called up and improves the applicability of the vehicle's voice call-up.
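The scene-determination step above can be sketched as a rule lookup over the collected parameter information. The specific rules and thresholds below are illustrative assumptions, not the patent's rules.

```python
# Sketch of determining a voice interaction scene from parameter information
# (external driving environment, vehicle state, user behaviour).

def determine_scene(params):
    """Map collected parameter information to a voice interaction scene,
    or None when no scene applies (voice is then not called up)."""
    vehicle = params.get("vehicle_state", {})
    env = params.get("environment", {})
    if vehicle.get("warning_tone") == "seatbelt":
        return "seatbelt_unfastened"      # triggered by vehicle warning tone
    if vehicle.get("fuel_level", 1.0) < 0.1:
        return "low_fuel"                 # triggered by vehicle's own condition
    if env.get("weather") == "heavy_rain":
        return "bad_weather_warning"      # triggered by external environment
    return None
```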
As an implementable manner, the parameter information includes user behavior information, and the acquiring parameter information related to vehicle driving specifically includes:
sending an obtaining instruction to equipment for storing the user behavior information, wherein the obtaining instruction carries an authorization code preset by the user or the identifier of the vehicle;
and receiving user behavior information sent by the equipment after verifying that the authorization code or the identification of the vehicle is legal.
In this method, the vehicle's system (for example, the vehicle-mounted system) sends the device storing the user behavior information an acquisition instruction carrying an authorization code preset by the user or the identifier of the vehicle, and that device sends the user behavior information back only after verifying that the authorization code or the vehicle identifier is legitimate, ensuring the privacy and security of the user behavior information.
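The authorised-retrieval exchange above can be sketched as follows; the in-memory `BehaviorStore` is an illustrative stand-in for the device storing user behavior information, and its fields are assumptions.

```python
# Sketch of the authorised retrieval: the acquisition instruction carries a
# preset authorization code or the vehicle identifier, and the storing
# device verifies it before disclosing user behaviour information.

class BehaviorStore:
    """Stands in for the device storing the user behavior information."""
    def __init__(self, valid_codes, valid_vehicle_ids, data):
        self.valid_codes = set(valid_codes)
        self.valid_vehicle_ids = set(valid_vehicle_ids)
        self.data = data

    def handle_request(self, auth_code=None, vehicle_id=None):
        """Return the data only if the code or vehicle id is legitimate."""
        if auth_code in self.valid_codes or vehicle_id in self.valid_vehicle_ids:
            return self.data
        return None  # verification failed: nothing is disclosed
```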
As an implementable manner, the parameter information includes user behavior information, and the acquiring parameter information related to vehicle driving specifically includes:
user behavior information input by a user is received.
Optionally, the user behavior information includes a user to-do event.
As an implementable manner, if the event to be handled by the user includes the occurrence time of the event to be handled by the user, the broadcasting the first voice to the user specifically includes:
determining the time for broadcasting the first voice to the user according to the occurrence time of the event to be handled by the user;
and when the moment arrives, broadcasting the first voice to the user.
In this method, the moment for broadcasting the first voice is determined, and the first voice is broadcast to the user when that moment arrives. The user's potential voice-triggering need can thus be learned in advance: the voice is actively called up and the pending event is announced, preventing the user from forgetting it; this convenience further improves the user experience.
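The timing step above — deriving the broadcast moment from the to-do event's occurrence time — can be sketched as follows. The fixed 30-minute lead time is an illustrative assumption; the patent only says the moment is determined from the event's occurrence time.

```python
# Sketch of timing the first-voice broadcast from a user to-do event.

from datetime import datetime, timedelta

def broadcast_time(event_time, lead=timedelta(minutes=30)):
    """Broadcast the reminder a lead interval before the event occurs."""
    return event_time - lead

def should_broadcast(now, event_time, lead=timedelta(minutes=30)):
    """True once the computed broadcast moment has arrived."""
    return now >= broadcast_time(event_time, lead)
```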
In another aspect, the present application provides a voice call-up device, comprising:
the acquisition module is used for acquiring a voice interaction scene;
the determining module is used for determining corresponding first voice according to the voice interaction scene acquired by the acquiring module, wherein the first voice is voice prompting a user to interact with the vehicle or equipment on the vehicle;
the voice broadcasting module is used for broadcasting the first voice determined by the determining module to a user;
the receiving module is used for receiving operation information input by the user according to the first voice determined by the determining module;
and the processing module is used for executing corresponding operation according to the operation information received by the receiving module.
As an implementable manner, the receiving module is specifically configured to receive a user operation instruction input by a user according to the first voice;
the processing module is specifically configured to determine, according to the user operation instruction, whether the user operation is the operation that the first voice instructed the user to perform, and to instruct the voice broadcasting module to stop broadcasting the first voice when it determines that it is.
As an implementable manner, the receiving module is specifically configured to receive a second voice input by the user according to the content prompted by the first voice, where the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is a selection result of the user;
the processing module is specifically configured to execute a corresponding operation according to the second voice.
As an implementable manner, the obtaining module includes:
the acquisition submodule is used for acquiring parameter information related to vehicle running; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and the determining submodule is used for determining a voice interaction scene corresponding to the current running state of the vehicle according to the parameter information.
As an implementable manner, the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes vehicle own condition information and/or vehicle warning sound information.
As an implementable manner, the vehicle warning tone information includes a type of the vehicle warning tone or a number of announcements of the vehicle warning tone.
As an implementable manner, the parameter information includes user behavior information, and the obtaining sub-module specifically includes:
a sending unit, configured to send an obtaining instruction to a device that stores the user behavior information, where the obtaining instruction carries an authorization code preset by the user or an identifier of the vehicle;
and the receiving unit is used for receiving the user behavior information sent by the equipment after verifying that the authorization code or the identification of the vehicle is legal.
As an implementation manner, the parameter information includes user behavior information, and the obtaining sub-module is specifically configured to receive user behavior information input by a user.
As one implementable manner, the user behavior information includes a user to-do event.
As an implementable manner, the user to-do event includes an occurrence time of the user to-do event; the voice broadcast module includes:
the determining unit is used for determining the time for broadcasting the first voice to the user according to the occurrence time of the event to be handled of the user;
and the broadcasting unit is used for broadcasting the first voice to the user when the time determined by the determining unit arrives.
For the beneficial effects of the voice call-up device provided by each of the above implementation manners, refer to the beneficial effects of the voice call-up method in the corresponding implementation manners; they are not repeated here.
In another aspect, the present application provides a voice call-up device, comprising:
the processor is used for acquiring a voice interaction scene and determining a corresponding first voice according to the voice interaction scene;
an output device, coupled to the processor, for broadcasting the first voice to a user, the first voice being a voice prompting the user to interact with the vehicle or a device on the vehicle;
the input device is coupled to the processor and used for receiving operation information input by a user according to the first voice;
the processor is further configured to execute a corresponding operation according to the operation information obtained by the input device.
As an implementable manner, the input device is specifically configured to receive a user operation instruction input by a user according to the first voice;
the processor is specifically configured to determine, according to the user operation instruction, whether the user operation is the operation that the first voice instructed the user to perform, and to instruct the output device to stop broadcasting the first voice when it determines that it is.
As an implementable manner, the input device is specifically configured to receive a second voice input by the user according to the content prompted by the first voice, where the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is a selection result of the user;
the processor is specifically configured to execute a corresponding operation according to the second voice.
As an implementable manner, the input device is further used for acquiring parameter information related to vehicle driving; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
the processor is specifically configured to determine a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
As an implementable manner, the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes vehicle own condition information and/or vehicle warning sound information.
As an implementable manner, the vehicle warning tone information includes a type of the vehicle warning tone or a number of announcements of the vehicle warning tone.
As an implementable manner, the parameter information includes user behavior information;
the output device is further configured to send an acquisition instruction to a device storing the user behavior information, where the acquisition instruction carries an authorization code preset by the user or an identifier of the vehicle;
the input device is specifically configured to receive user behavior information sent by the device after verifying that the authorization code or the identifier of the vehicle is legitimate.
In an implementation manner, the parameter information includes user behavior information, and the input device is specifically configured to receive user behavior information input by a user.
As one implementable manner, the user behavior information includes a user to-do event.
As an implementable manner, the user to-do event includes an occurrence time of the user to-do event;
the processor is further configured to determine a time for broadcasting the first voice to the user according to the occurrence time of the event to be handled by the user;
and the output equipment is specifically used for broadcasting the first voice to the user when the time arrives.
For the beneficial effects of the voice call-up device provided by the above implementation manners, refer to the beneficial effects of the voice call-up method in the corresponding implementation manners; they are not repeated here.
In another aspect, the present application provides a voice call-up apparatus for a vehicle, comprising: an on-board processor, an on-board output device and an on-board input device;
the onboard processor is used for acquiring a voice interaction scene and determining corresponding first voice according to the voice interaction scene;
the onboard output device is coupled to the onboard processor and is used for broadcasting the first voice to a user, wherein the first voice is voice prompting the user to interact with the vehicle or a device on the vehicle;
the onboard input device is coupled to the onboard processor and used for receiving operation information input by a user according to the first voice;
and the onboard processor is also used for executing corresponding operation according to the operation information obtained by the onboard input equipment.
In another aspect, the present application provides a vehicle-mounted internet operating system, including:
the voice control unit is used for determining corresponding first voice according to the acquired voice interaction scene and broadcasting the first voice to the user;
the operation control unit is used for controlling the voice call-up system to execute corresponding operation according to the operation information acquired by the vehicle-mounted input equipment; and the operation information is input to the vehicle-mounted input equipment by the user according to the first voice.
In this method, a voice interaction scene corresponding to the vehicle's current driving state is obtained, the vehicle's voice function is automatically called up through that scene, and a first voice corresponding to the current scene is broadcast to the user, prompting the user to interact with the vehicle or with equipment on the vehicle. After receiving the operation information the user inputs according to the first voice, the vehicle's system executes the corresponding operation, ensuring interaction between the system and the user and meeting the user's interaction needs. Because voice is called up automatically whenever a voice interaction scene is acquired, rich voice content is provided, and the user can interact with the vehicle conveniently without manually triggering its voice interaction function, improving the intelligence of man-machine interaction. In addition, since the acquired voice interaction scene corresponds to the vehicle's diverse driving states, the voice function can be actively called up in different driving states, improving the applicability of the vehicle's voice call-up.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic diagram of an optional networking mode of the present application;
FIG. 2 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 3 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 4 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 5 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 6 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 7 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 8 is a flowchart of a voice call-up method according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a voice call-up device according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of a voice call-up device according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a voice call-up device according to an embodiment of the present application;
FIG. 12 is a hardware diagram of a voice call-up device according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an in-vehicle system according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an in-vehicle internet operating system according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present application, as detailed in the appended claims.
The vehicles according to the embodiments of the present application include, but are not limited to, internal combustion engine automobiles or motorcycles, electric power-assisted vehicles, electric balance vehicles, remote control vehicles and other vehicles, small aircraft (e.g., unmanned aircraft, manned small aircraft, remote control aircraft), and various variants thereof. The vehicle referred to herein may be a gasoline-only vehicle, a gas-only vehicle, a dual-fuel gasoline/gas vehicle, or an electric vehicle with power assistance; the embodiments of the present application do not limit the type of vehicle, provided the vehicle has a corresponding on-board system. The following embodiments are all described taking an automobile as the vehicle.
The voice call-up method, device and equipment according to the embodiments of the present application may be applied in the networking mode shown in fig. 1. The network architecture in fig. 1 may include a vehicle's in-vehicle system and a wireless network. The in-vehicle system may provide a user operation interface; optionally, this may be a voice interface for user input, or an interface that receives operation instructions manually triggered by the user, such as a USB flash drive port, a USB interface, a seat-belt buckle socket, and the like. The vehicle can connect to a wireless network, which may optionally be a 2G, 3G, 4G or 5G network, Wireless Fidelity (Wi-Fi), or the like, or alternatively an internet of things or internet of vehicles. Through the wireless network, the in-vehicle system may access different network servers, such as a mailbox server, a short message server and a cloud server; only three servers are shown in fig. 1, but the architecture is not limited thereto.
Optionally, the execution subject of the method according to the embodiments of the present application may be an in-vehicle system. Optionally, the in-vehicle system may be a system integrated with the vehicle machine on a vehicle, such as an in-vehicle navigation system and/or an in-vehicle entertainment system, or a system including the vehicle machine and other devices of the vehicle, such as sensors; the in-vehicle system may interact with the vehicle and the user. The embodiments of the present application do not limit the specific content of the in-vehicle system, as long as the in-vehicle system can actively call up the voice function of the vehicle. The following embodiments take the in-vehicle system as the execution subject by way of example, but the execution subject in the embodiments of the present application is not limited thereto.
The voice call-up method, device, and equipment of the present application aim to solve the technical problems in the prior art that voice call-up requires manual triggering by the user, targets only a single kind of object, has low applicability, and makes human-computer interaction insufficiently intelligent.
The technical solution of the present application will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a flowchart illustrating a voice call-up method according to an embodiment of the present application. This embodiment relates to a process in which the in-vehicle system, according to the determined voice interaction scene, actively broadcasts to the user a first voice corresponding to the current voice interaction scene, and executes a corresponding operation after receiving operation information input by the user according to the content prompted by the first voice, thereby ensuring active interaction with the user and providing convenience for the user. As shown in fig. 2, the method may include:
s101: and acquiring a voice interaction scene.
Specifically, taking the case where the vehicle is an automobile as an example, the voice interaction scene in the embodiment of the present application may be a scene that can actively trigger the in-vehicle system to call up the voice function, broadcast voice to the user, and interact with the user. Optionally, the voice interaction scene may be a scene in which the in-vehicle system actively prompts the user to perform a corresponding operation, for example, a scene in which the user is actively prompted by voice about vehicle abnormality information (for example, the trunk is not closed) so that the user eliminates the abnormality. Optionally, the voice interaction scene may also be a scene in which the in-vehicle system discovers a potential voice interaction demand of the user by learning or acquiring parameter information related to vehicle driving (for example, the vehicle often passes a certain geographical location), and actively initiates a prompt or query to the user and performs voice interaction with the user. For example, if the user generally departs from home at 8 o'clock and stops halfway at a convenience store to buy breakfast at half past 8, the in-vehicle system may learn the location information related to the vehicle's driving; when the user departs at 8 o'clock, the system may actively ask whether the user will go to the convenience store to buy breakfast at half past 8, and then actively navigate to the convenience store for the user according to the user's voice response, or actively play soothing music for the user, and so on.
For another example, with the user's authorization, the in-vehicle system may log in to the user's mailbox server to obtain some of the user's mails. If a mail indicates that the user needs to drive to a certain place at a certain time, the in-vehicle system may actively initiate a voice query after the user gets into the vehicle, that is, ask whether the user wants to drive to that place for the meeting, and then execute the corresponding operation according to the user's voice answer. In this technical solution, the user does not need to call up the vehicle's voice interaction function by manual contact; instead, the voice is automatically called up for the user in different voice interaction scenes, which provides rich voice content for the user, makes it convenient for the user to interact with the vehicle, and improves the intelligence of human-computer interaction.
Optionally, the voice interaction scene in the embodiment of the present application may be determined in any manner; for example, it may be determined from the driving state of the vehicle. The driving state of the vehicle may be information such as the surrounding environment of the vehicle and the vehicle speed, or whether there is a user operation inside the vehicle, whether the vehicle itself has unsafe factors, and the like. Optionally, different vehicle driving states may correspond to different voice interaction scenes. Whatever the driving state, as long as the in-vehicle system can acquire the voice interaction scene corresponding to that driving state, it can actively call up the voice function of the vehicle; that is, the voice interaction scenes for automatically calling up the vehicle voice function in the embodiment of the present application are rich, and the applicability is high. Of course, determining the voice interaction scene from the driving state of the vehicle is only an example, and the embodiment of the present application is not limited thereto.
S102: and determining corresponding first voice according to the voice interaction scene, and broadcasting the first voice to a user, wherein the first voice is voice for prompting the user to interact with a vehicle or equipment on the vehicle.
Specifically, after the in-vehicle system acquires the voice interaction scene, it can determine the first voice corresponding to the voice interaction scene. Optionally, the in-vehicle system may pre-establish mapping relationships between different voice interaction scenes and different broadcast voices, and once the in-vehicle system determines the voice interaction scene corresponding to the current driving state, it may actively broadcast the first voice to the user. Optionally, the voice interaction scene may take the form of information, so the in-vehicle system may also perform keyword recognition on the voice interaction scene, form the recognized keywords into the first voice through speech synthesis, and broadcast the first voice to the user.
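Such a pre-established mapping, with the keyword-based fallback, can be sketched as follows; the scene names and prompt texts are illustrative assumptions, not taken from the present application:

```python
# Hypothetical sketch of step S102: resolving the first voice for a
# voice interaction scene.  Scene names and prompt texts are invented.
SCENE_TO_FIRST_VOICE = {
    "seat_belt_prompt": "Please fasten your seat belt",
    "left_door_abnormal": "Please close the left door",
    "driving_mode_prompt": "Switch the driving mode to eco mode?",
}

def resolve_first_voice(scene: str) -> str:
    """Return the pre-mapped prompt, or synthesize one from keywords."""
    if scene in SCENE_TO_FIRST_VOICE:
        return SCENE_TO_FIRST_VOICE[scene]
    # Fallback path: keyword recognition plus speech synthesis; here the
    # "keywords" are simply the scene tokens joined into a sentence.
    return "Attention: " + scene.replace("_", " ")
```

A real system would hand the resolved text to the TTS engine rather than return it as a string.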
The voice content of the first voice prompts the user to interact with the vehicle or a device on the vehicle. The "interaction" may be a voice interaction by the user with the vehicle or a device on the vehicle, a manual interaction, or interaction in another manner. Optionally, the device on the vehicle may be a car radio, a driving recorder, or another in-vehicle device.
S103: and receiving operation information input by a user according to the first voice, and executing corresponding operation according to the operation information.
Specifically, after the user receives the first voice broadcast by the in-vehicle system, the user executes a corresponding user operation according to the content of the first voice; the user operation may be a voice input or a manual operation. After the in-vehicle system receives the operation information input by the user, it executes the corresponding operation according to the operation information, so as to ensure the interaction behavior with the user and meet the user's interaction demand.
According to the voice call-up method provided by the embodiment of the present application, a voice interaction scene is acquired, and the voice function of the vehicle is automatically called up in that scene to broadcast to the user the first voice corresponding to the current voice interaction scene, thereby prompting the user to interact with the vehicle or a device on the vehicle; after the vehicle's system receives the operation information input by the user according to the first voice, it executes the corresponding operation according to the operation information, ensuring the interaction behavior with the user and meeting the user's interaction demand. With the method provided by the embodiment of the present application, when a voice interaction scene is acquired, voice is automatically called up for the user, rich voice content is provided, and the user can conveniently interact with the vehicle or a device on the vehicle without manually triggering the vehicle's voice interaction function, which improves the intelligence of human-computer interaction. In addition, the acquired voice interaction scene can correspond to the driving state of the vehicle, and since the driving states of the vehicle are rich, the determined voice interaction scenes are rich as well; the voice function of the vehicle can be actively called up under different driving states, which improves the applicability of the vehicle's voice call-up.
Fig. 3 is a flowchart illustrating a voice call-up method according to an embodiment of the present application. This embodiment relates to a specific process of acquiring the voice interaction scene. On the basis of the embodiment shown in fig. 2, as shown in fig. 3, the above step S101 may specifically include:
s201: acquiring parameter information related to vehicle running; wherein the parameter information includes at least one of external driving environment information, vehicle state information, and user behavior information.
Specifically, continuing with the example in which the vehicle is an automobile, the external driving environment information may be the automobile's external driving environment information, the vehicle state information may be the automobile's own condition information, and the vehicle warning sound information may be the automobile's warning sound information. The in-vehicle system can acquire parameter information related to vehicle driving in real time, and the parameter information may include at least one of external driving environment information, vehicle state information, and user behavior information. Optionally, the external driving environment information includes road condition information and/or weather information; the vehicle state information includes vehicle condition information and/or vehicle warning sound information; and the user behavior information may include a to-do event of the user, an operation of the user on the vehicle (for example, the user inserts a USB flash disk into a USB interface of the vehicle), a behavior habit of the user (for example, the user habitually does the same thing at a certain time), and the like.
Optionally, the road condition information may include information about whether a road is congested, whether a traffic accident has occurred on the road ahead, or whether there is a monitoring camera in the road section; the vehicle condition information may include facility condition information of the vehicle, such as whether the engine is faulty and whether the brake pads are in good condition; and the vehicle warning sound information may include the volume and tone of the vehicle warning sound. Optionally, the vehicle warning sound information may also be the type of the vehicle warning sound or the number of times the vehicle warning sound is broadcast.
Optionally, when the parameter information is road condition information, the in-vehicle system may establish a communication connection with a traffic road monitoring server through the wireless network shown in fig. 1 to obtain the road condition information, or may obtain the road condition information through radio broadcasting. Optionally, when the parameter information is weather information, the in-vehicle system may obtain the weather information through a network server in the wireless network shown in fig. 1. When the parameter information is vehicle condition information, the in-vehicle system can acquire it through different sensors of the vehicle or through software detection. When the parameter information is vehicle warning sound information, the in-vehicle system can acquire it through the cooperation of the in-vehicle audio and the processor.
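As a rough illustration of these per-type acquisition channels, the collector below dispatches each parameter type to its own fetcher; the fetcher names and sample values are assumptions, and real fetchers would talk to the traffic server, weather server, or on-board sensors:

```python
from typing import Callable, Dict

def make_parameter_collector(fetchers: Dict[str, Callable[[], str]]):
    """Bundle per-type fetchers (traffic server, weather server,
    sensors, audio/processor) into one real-time collection call."""
    def collect() -> Dict[str, str]:
        return {name: fetch() for name, fetch in fetchers.items()}
    return collect

# Stub fetchers standing in for the real channels described above.
collect = make_parameter_collector({
    "road_conditions": lambda: "congestion ahead",   # traffic monitoring server
    "weather": lambda: "light rain",                 # network server
    "vehicle_condition": lambda: "brake pads worn",  # on-board sensors
})
```

Registering each channel behind a uniform callable keeps S201 independent of where any particular parameter actually comes from.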
Optionally, when the parameter information includes user behavior information, the specific manner in which the in-vehicle system acquires the user behavior information may include two possible implementations, as follows:
A first possible implementation: referring to fig. 4, the above step S201 may specifically include:
s301: and sending an obtaining instruction to equipment for storing the user behavior information, wherein the obtaining instruction carries an authorization code preset by the user or the identifier of the vehicle. Alternatively, the identification of the vehicle may be an identification of a vehicle.
Specifically, the device storing the user behavior information may be the user's terminal, or may be one of various servers in the network, such as a mailbox server, a short message server, or a cloud server. Optionally, the user may pre-configure an authorization code for the vehicle, so that the vehicle is authorized to obtain the user's behavior information from the device storing it. Optionally, instead of configuring an authorization code, the user may register or reserve the identifier of the vehicle in the device storing the user behavior information; when the vehicle corresponding to that identifier accesses the device, it may acquire the user's behavior information from it. On this basis, the in-vehicle system can request the user behavior information by sending an acquisition instruction to the device storing it.
S302: and receiving user behavior information sent by the equipment after verifying that the authorization code or the identification of the vehicle is legal.
Specifically, after the device storing the user behavior information receives the vehicle's acquisition instruction, it performs the corresponding legality judgment on the content carried in the instruction, that is, it judges whether the authorization code in the instruction is the authorization code configured in advance by the user, or whether the vehicle identifier carried in the instruction is an identifier authorized in advance by the user. When the device determines that the authorization code or the vehicle identifier in the acquisition instruction is legal, it may send the user behavior information to the in-vehicle system, thereby ensuring the privacy and safety of the user behavior information.
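A minimal sketch of this device-side check, with invented authorization codes, vehicle identifiers, and a sample behavior record, might look like:

```python
# Device-side legality check for S301/S302.  The authorization code,
# vehicle identifier, and behavior record below are all made up.
AUTHORIZED_CODES = {"code-1234"}
REGISTERED_VEHICLE_IDS = {"vehicle-A1"}
USER_BEHAVIOR = ["drive to the airport at nine o'clock"]

def handle_acquisition(auth_code=None, vehicle_id=None):
    """Release the behavior information only to a legal requester."""
    if auth_code in AUTHORIZED_CODES or vehicle_id in REGISTERED_VEHICLE_IDS:
        return USER_BEHAVIOR
    return None  # illegal request: nothing is disclosed
```

Either credential suffices on its own, matching the "authorization code or vehicle identifier" alternative described above.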
A second possible implementation: the in-vehicle system may directly receive the user behavior information input by the user.
In this possible implementation, the in-vehicle system may provide a user input interface, which may be a device access interface, a voice input interface, a handwriting interface of a display screen, or the like. The user can insert a terminal, or a device such as a USB flash disk that stores the behavior information, into the device access interface so that the in-vehicle system can read the user behavior information; the system can also acquire the voice input by the user through the voice input interface and obtain the user behavior information through the corresponding voice recognition technology, or acquire the user behavior information input by the user through the handwriting interface.
Through either the first or the second possible implementation, the in-vehicle system can obtain the user behavior information.
S202: and determining a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
Specifically, after obtaining the parameter information, the in-vehicle system can determine the current vehicle driving state, and further determine the voice interaction scene corresponding to it. Optionally, the in-vehicle system may determine the voice interaction scene corresponding to the current driving state through a preset first mapping relationship, where the first mapping relationship may include correspondences between different parameter information, different vehicle driving states, and different voice interaction scenes. The embodiment of the present application does not limit how the voice interaction scene corresponding to the current driving state is determined from the parameter information, as long as it can be so determined.
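One minimal form of such a first mapping relationship, using invented rules and scene names, is a two-stage lookup from parameter information to driving state to scene:

```python
# Hypothetical two-stage mapping for S202: parameter information ->
# vehicle driving state -> voice interaction scene.
DRIVING_STATE_RULES = [
    (lambda p: p.get("seat_belt") == "unbuckled", "seat_belt_warning"),
    (lambda p: p.get("left_door") == "open", "door_warning"),
]
STATE_TO_SCENE = {
    "seat_belt_warning": "seat_belt_prompt_scene",
    "door_warning": "left_door_abnormal_scene",
}

def scene_for(parameters: dict):
    """Return the scene for the first matching driving state, if any."""
    for predicate, state in DRIVING_STATE_RULES:
        if predicate(parameters):
            return STATE_TO_SCENE[state]
    return None
```

New driving states are supported by adding a rule and a scene entry, which is one way the mapping stays "rich" without changing the lookup logic.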
According to the voice call-up method provided by the embodiment of the present application, parameter information related to vehicle driving is collected, and the voice interaction scene corresponding to the current driving state is determined from that parameter information, so that the voice function of the vehicle is actively called up according to the voice interaction scene and the vehicle interacts with the user. In this technical solution, the diversity of the parameter information guarantees the diversity of the voice interaction scenes, which enriches the ways of actively calling up the vehicle's voice function and improves the applicability of the vehicle's voice call-up.
Fig. 5 is a flowchart illustrating a voice call-up method according to an embodiment of the present application. This embodiment relates to a specific process in which the user inputs a user operation instruction to the in-vehicle system according to the content prompted by the first voice, and the in-vehicle system determines, according to the user operation instruction, whether to stop broadcasting the first voice. On the basis of the foregoing embodiments, as shown in fig. 5, the above S103 may specifically include:
s401: and receiving a user operation instruction input by a user according to the first voice.
Specifically, continuing with the example in which the vehicle is an automobile, the first voice in this embodiment is used to instruct the user to perform a corresponding user operation. After the user receives the first voice broadcast by the in-vehicle system, the user executes the user operation indicated by the first voice. Once the user performs the operation, the in-vehicle system may detect the operation instruction corresponding to it through an interactive interface between the vehicle (or a device on the vehicle) and the user.
For example, when the user has not fastened the seat belt while the vehicle is driving, a beep (a vehicle warning sound) sounds inside the vehicle; that is, the current vehicle driving state is that a beep is sounding. The in-vehicle system determines that the voice interaction scene corresponding to this driving state is a seat belt prompting scene, then determines from that scene that the corresponding first voice is "Please fasten your seat belt", and broadcasts the first voice to the user. After receiving the first voice, the user fastens the seat belt according to its content, so the in-vehicle system can detect the user's operation through the seat belt insertion port provided by the vehicle, and thereby learn that the operation instruction corresponding to the user's operation is "seat belt inserted".
For another example, when the left door is not closed properly while the vehicle is driving, a beep sounds inside the vehicle (its type may differ from the beep in the previous example); that is, the current vehicle driving state is that a beep is sounding. The in-vehicle system determines that the voice interaction scene corresponding to this driving state is a left-door abnormal scene, then determines from that scene that the corresponding first voice is "Please close the left door", and broadcasts the first voice to the user. After receiving the first voice, the user closes the left door according to its content, so the in-vehicle system can detect the user's operation through the door detection interface provided by the vehicle, and thereby learn that the operation instruction corresponding to the user's operation is "door closed".
In either of the above cases, when the current vehicle driving state is that a beep is sounding, the in-vehicle system can determine the voice interaction scene corresponding to the current driving state, and further determine the first voice corresponding to that scene, that is, prompt the user with clear voice content in a Text-To-Speech (TTS) manner, which avoids the situation where the user cannot know the meaning of a beep sounding in the vehicle. Optionally, the in-vehicle system may distinguish the voice interaction scene to which a sounding vehicle warning sound corresponds by the type of the warning sound or the number of times it sounds; for example, when the vehicle warning sound is a "drip", the in-vehicle system determines that the corresponding voice interaction scene is the seat belt prompting scene, and when the vehicle warning sound is a "click", the in-vehicle system determines that the corresponding voice interaction scene is the door abnormal scene.
It should be noted that the in-vehicle system broadcasting an intuitive first voice for a beep is only an example. Optionally, when the vehicle driving state is that another abnormality occurs in the vehicle, for example, the engine oil level is low, the engine is overheated, or the windshield washer fluid is exhausted, the in-vehicle system likewise determines the voice interaction scene corresponding to the current driving state, and plays to the user the first voice corresponding to that scene, prompting the user to execute the user operation indicated by the first voice. In this way the user can intuitively and effectively learn the vehicle's needs and discover the vehicle's abnormality or the user's own improper operation in time, which further improves the user experience.
S402: and judging whether the user operation is the user operation executed by the user indicated by the first voice according to the user operation instruction. If so, S403 is executed, and if not, S404 is executed.
S403: and stopping broadcasting the first voice.
S404: and playing the first voice according to a preset period.
Specifically, after detecting the user operation instruction, the in-vehicle system judges, according to that instruction, whether the operation currently executed by the user is the user operation indicated by the first voice. When the in-vehicle system detects that it is, the system stops broadcasting the first voice; when the in-vehicle system detects that it is not, the system broadcasts the first voice according to a preset period, where the preset period may be, for example, broadcasting the first voice once every several seconds. Optionally, when the vehicle driving state targeted by the first voice is one in which a beep is sounding, and the in-vehicle system detects that the operation currently executed by the user is not the user operation indicated by the first voice, the beep may likewise be repeated at intervals of several seconds.
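The S402-S404 loop can be sketched as below; broadcasting is simulated by collecting prompts into a list, and the operation names are invented for the seat belt example:

```python
# Re-broadcast the first voice each preset period until the detected
# user operation matches the one the first voice asked for (S402-S404).
def prompt_until_done(expected_op, detect_op, max_periods=5):
    broadcasts = []
    for _ in range(max_periods):
        broadcasts.append("first voice")   # one broadcast per period
        if detect_op() == expected_op:     # S402: does the operation match?
            break                          # S403: stop broadcasting
    return broadcasts

# The user fastens the seat belt on the third period.
detected = iter([None, None, "seat_belt_inserted"])
rounds = prompt_until_done("seat_belt_inserted", lambda: next(detected))
```

A real implementation would sleep for the preset period between broadcasts and poll the seat belt insertion port rather than an iterator.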
According to the voice call-up method provided by the embodiment of the present application, the user is prompted by the first voice to perform a required user operation; after the user performs an operation according to the instruction of the first voice, the system judges, according to the user operation instruction input by the user, whether that operation is the user operation indicated by the first voice, and when the vehicle's system detects that it is, the system stops playing the first voice. The method provided by the embodiment of the present application enables the user to intuitively learn the vehicle's needs, so that the user can discover the vehicle's abnormality or the user's own improper operation in time, which further improves the user experience.
Fig. 6 is a flowchart illustrating a voice call-up method according to an embodiment of the present application. This embodiment relates to a specific process in which the user inputs a second voice to the in-vehicle system according to the content prompted by the first voice, and the in-vehicle system interacts with the user according to the second voice. In this embodiment, the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is the user's selection result. On the basis of the foregoing embodiments, as shown in fig. 6, the above S103 may specifically include:
s501: and receiving a second voice input by the user according to the content prompted by the first voice.
Specifically, continuing with the example in which the vehicle is an automobile, the first voice in this embodiment is used to prompt the user to perform a selection operation on the content it prompts. After the user receives the first voice broadcast by the in-vehicle system, the user makes a selection according to the content prompted by the first voice and inputs the selection result to the in-vehicle system by voice; this selection result is the second voice. For example, when the in-vehicle system determines that the voice interaction scene corresponding to the current driving state is a driving mode prompting scene, it broadcasts to the user a first voice such as "Switch the driving mode to eco mode?", and the user may input a selection result for the first voice, for example "yes", or "no, I want the comfort mode". For another example, suppose the parameter information acquired by the in-vehicle system is a user to-do event set as "drive to the airport at nine o'clock", and the first voice determined by the in-vehicle system is "Do you want to drive to the airport now?"; then once the user starts the vehicle, the in-vehicle system broadcasts the first voice to remind the user of the to-do event.
S502: and executing corresponding operation according to the second voice.
Specifically, after receiving the second voice input by the user, the in-vehicle system determines the operation corresponding to the second voice through a matching mechanism combined with the vehicle, and then executes that operation to ensure the interaction between the vehicle and the user. Continuing with the example in S501, assuming the second voice input by the user is "no, I want the comfort mode", the in-vehicle system may learn the content of the second voice through voice recognition technology, and then switch the driving mode to comfort mode according to that content, thereby meeting the interaction demand between the user and the vehicle.
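A crude stand-in for that matching mechanism, with the speech recognizer stubbed out and the phrase-to-operation rules invented for the driving-mode example:

```python
# Map the recognized text of the second voice to an operation (S502).
# The rules and operation names are illustrative assumptions.
def operation_for_second_voice(text: str) -> str:
    text = text.lower()
    if "comfort" in text:
        return "switch_to_comfort_mode"
    if text.startswith("yes"):
        return "switch_to_eco_mode"
    return "keep_current_mode"
```

A production system would run the recognized text through a proper intent classifier instead of substring rules, but the scene-specific mapping from answer to operation is the same shape.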
Optionally, when the parameter information acquired by the in-vehicle system is a user to-do event, and the to-do event includes its occurrence time, then after the in-vehicle system determines the first voice corresponding to the parameter information, as shown in fig. 7, broadcasting the first voice to the user may specifically include:
s601: and determining the time for broadcasting the first voice to the user according to the occurrence time of the event to be handled by the user.
S602: and when the moment arrives, broadcasting the first voice to the user.
Specifically, when the user to-do event acquired by the in-vehicle system includes the occurrence time of the event, the in-vehicle system may determine, according to that occurrence time, the moment for broadcasting the first voice to the user, and broadcast the first voice when that moment arrives. Optionally, if the broadcast moment has not yet arrived when the vehicle is started, the first voice need not be broadcast until the moment arrives; or, if the broadcast moment arrives and the vehicle has not been started, the in-vehicle system can send the first voice to the user's terminal so that the terminal broadcasts it to the user as a reminder.
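Computing the broadcast moment from the occurrence time can be sketched as follows; the 30-minute lead time is an assumption for illustration:

```python
from datetime import datetime, timedelta

LEAD_TIME = timedelta(minutes=30)  # assumed advance notice before the event

def broadcast_moment(event_time: datetime) -> datetime:
    """Moment at which the first voice should be broadcast (S601)."""
    return event_time - LEAD_TIME

def should_broadcast(now: datetime, event_time: datetime) -> bool:
    """S602: broadcast once the computed moment has arrived."""
    return now >= broadcast_moment(event_time)
```

With a 9:00 event this yields an 8:30 broadcast moment; in practice the lead time would likely depend on the estimated driving time to the event's location.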
To describe the scheme of this embodiment more clearly, as a specific example, referring to the example diagram shown in fig. 8, suppose that the parameter information acquired by the in-vehicle system is a user to-do event that includes its occurrence time, the to-do event is set as "user X has a meeting at location A at 9 o'clock", the in-vehicle system determines that the moment for broadcasting the first voice is 8:30, and the in-vehicle system determines that the first voice is "In half an hour you have a meeting at location A; the drive is expected to take 20 minutes; do you want to go now?". As shown in fig. 8, the method specifically includes:
s701: the vehicle-mounted system determines the parameter information as 'the user has a meeting at the A position at 9 points in X'.
S702: and the vehicle-mounted system determines that the voice interaction scene corresponding to the current driving state is '9 points with a conference at the position A' according to the parameter information.
S703: and the vehicle-mounted system determines that the moment of playing the first voice is 8 points and 30 points according to the voice interaction scene.
S704: when the vehicle-mounted system arrives at 8 o' clock and 30 min, the first voice is broadcasted to the user, that is, a meeting is present at the position A after half an hour, and whether the meeting is going to or not is predicted after 20 minutes.
S705: the user inputs a second voice "yes, go" to the in-vehicle system according to the first voice.
S706: the vehicle system reports "good, now navigate to location a for you" to the user.
Optionally, while the user is driving, suppose the vehicle-mounted system collects another piece of parameter information in real time, namely road condition information indicating "road congestion ahead". The vehicle-mounted system then determines that the voice interaction scene corresponding to the current driving state is a "congestion scene", and further determines that the first voice corresponding to that scene is "The road ahead is congested and you may be late for the meeting at location A; call the meeting initiator, Xiao Y, to notify them?". After hearing the first voice, the user can make a selection and input a second voice to the vehicle-mounted system; assuming the second voice is "yes", the vehicle-mounted system can actively initiate a call request to the meeting initiator according to the second voice.
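The mapping from collected parameter information to a voice interaction scene, and from a scene to a first voice, might be sketched as below. Only the two rules appearing in this example are modeled; the dictionary keys, scene names, and exact wording are illustrative assumptions, not the patent's definitions.

```python
def scene_for(parameter_info):
    """Map collected parameter information to a voice interaction scene.
    Real-time road condition information takes priority in this sketch."""
    if parameter_info.get("road_condition") == "congested_ahead":
        return "congestion scene"
    if "todo_event" in parameter_info:
        return "todo reminder scene"
    return "no scene"

def first_voice_for(scene, parameter_info):
    """Pick the first voice for the scene (wording paraphrases the example)."""
    if scene == "congestion scene":
        return ("The road ahead is congested and you may be late for the "
                "meeting at location A. Call the meeting initiator to notify them?")
    if scene == "todo reminder scene":
        place = parameter_info["todo_event"]["place"]
        return "You have a meeting at %s in half an hour. Go now?" % place
    return None
```

In this sketch, a newly collected congestion report overrides the to-do reminder, matching the example in which the first voice changes once "road congestion ahead" is detected during driving.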
According to the voice call-up method provided by the embodiment of the present application, the first voice instructs the user to perform a selection operation on the content it prompts, the second voice input by the user according to the first voice is obtained, and the corresponding operation is executed according to the second voice, making human-machine interaction more intelligent. In addition, the method can acquire the user's potential voice-triggering requirement in advance, actively call up the voice for the user, and prompt the user's to-be-handled event, preventing the user from forgetting that event, providing convenience for the user, and further improving the user experience.
A voice call-up apparatus according to one or more embodiments of the present application will be described in detail below. The voice call-up apparatus can be implemented in the infrastructure of a vehicle, and can also be implemented in an interactive system between the vehicle and a wireless network. Those skilled in the art will appreciate that the voice call-up apparatus can be constructed using commercially available hardware components configured through the steps taught in the present scheme. For example, the processor components (or processing modules, processing units, determining modules, etc.) may be implemented using components such as single-chip microcomputers, microcontrollers, and microprocessors from enterprises such as Texas Instruments, Intel Corporation, and ARM.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Fig. 9 is a schematic structural diagram of a voice call-up apparatus according to an embodiment of the present application; the apparatus may be implemented by software, hardware, or a combination of the two. As shown in fig. 9, the voice call-up apparatus includes: an acquiring module 10, a determining module 11, a voice broadcast module 12, a receiving module 13, and a processing module 14.
The acquiring module 10 is used for acquiring a voice interaction scene;
a determining module 11, configured to determine a corresponding first voice according to the voice interaction scene acquired by the acquiring module 10, where the first voice is a voice prompting a user to interact with the vehicle or a device on the vehicle;
the voice broadcasting module 12 is configured to broadcast the first voice determined by the determining module 11 to a user;
a receiving module 13, configured to receive operation information input by a user according to the first voice determined by the determining module 11;
a processing module 14, configured to execute a corresponding operation according to the operation information received by the receiving module 13.
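The cooperation of modules 10 through 14 can be sketched as a simple pipeline. The class and method names below are illustrative assumptions; the patent does not prescribe this wiring, only the modules' responsibilities.

```python
class VoiceCallUpDevice:
    """Wires the five modules of fig. 9 together (illustrative sketch only)."""

    def __init__(self, acquire, determine, broadcast, receive, process):
        self.acquire = acquire      # acquiring module 10
        self.determine = determine  # determining module 11
        self.broadcast = broadcast  # voice broadcast module 12
        self.receive = receive      # receiving module 13
        self.process = process      # processing module 14

    def run_once(self):
        scene = self.acquire()               # obtain a voice interaction scene
        first_voice = self.determine(scene)  # pick the first voice for it
        self.broadcast(first_voice)          # announce it to the user
        operation_info = self.receive()      # user's response to the prompt
        return self.process(operation_info)  # execute the corresponding operation
```

Passing the modules in as callables keeps the sketch agnostic to whether they are realized in software, hardware, or a combination of the two, as the embodiment allows.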
The voice call-up device provided by the embodiment of the application can execute the method embodiment, the implementation principle and the technical effect are similar, and the details are not repeated herein.
In a possible implementation manner of the embodiment of the present application, the receiving module 13 is specifically configured to receive a user operation instruction input by a user according to the first voice;
the processing module 14 is specifically configured to judge, according to the user operation instruction, whether the user operation is the operation that the first voice instructed the user to perform, and to instruct the voice broadcast module 12 to stop broadcasting the first voice when it is.
In another possible implementation manner of this embodiment, the receiving module 13 is specifically configured to receive a second voice input by the user according to the content prompted by the first voice, where the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is a selection result of the user;
the processing module 14 is specifically configured to execute a corresponding operation according to the second voice.
Further, on the basis of the embodiment shown in fig. 9, refer to fig. 10 for a schematic structural diagram of a voice call-up apparatus according to an embodiment of the present application. In fig. 10, the obtaining module 10 includes:
an acquisition submodule 101 for acquiring parameter information related to vehicle travel; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and the determining submodule 102 is used for determining a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
Optionally, the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes the vehicle's own condition information and/or vehicle warning tone information. Optionally, the vehicle warning tone information includes the type of the vehicle warning tone or the number of times the vehicle warning tone has been broadcast.
Optionally, if the parameter information includes user behavior information, referring to fig. 10, the obtaining sub-module 101 specifically includes: a sending unit 1011, configured to send an obtaining instruction to a device storing the user behavior information, where the obtaining instruction carries an authorization code preset by the user or an identifier of the vehicle;
a receiving unit 1012, configured to receive user behavior information sent by the device after verifying that the authorization code or the identifier of the vehicle is legal.
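The acquisition handshake performed by the sending unit 1011 and the receiving unit 1012 might be sketched as follows. The dict-based "storing device" and the credential values are purely illustrative assumptions; the patent only requires that the obtaining instruction carry an authorization code preset by the user or the vehicle's identifier, and that the storing device verify it before returning the data.

```python
def fetch_user_behavior(storing_device, credential):
    """Send an obtaining instruction carrying an authorization code or the
    vehicle's identifier; the device storing the user behavior information
    verifies the credential before returning the data (illustrative sketch)."""
    if credential not in storing_device["authorized_credentials"]:
        return None  # verification failed: no user behavior information returned
    return storing_device["user_behavior"]
```

Returning `None` on a failed check stands in for the device simply not sending anything back; a real exchange would run over a network protocol rather than a local dictionary lookup.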
Optionally, the parameter information includes user behavior information, and the obtaining sub-module 101 is specifically configured to receive the user behavior information input by the user.
Optionally, the user behavior information includes a user to-do event.
Optionally, the user's to-do event includes its occurrence time; refer to fig. 11 for a schematic structural diagram of a voice call-up apparatus according to an embodiment of the present application. In fig. 11, the voice broadcast module 12 may include:
a determining unit 121, configured to determine, according to an occurrence time of the event to be handled by the user, a time at which the first voice is broadcasted to the user;
and a broadcasting unit 122 configured to broadcast the first voice to the user when the time determined by the determining unit 121 arrives.
The voice call-up device provided by the embodiment of the application can execute the method embodiment, the implementation principle and the technical effect are similar, and the details are not repeated herein.
Fig. 12 is a hardware schematic diagram of a voice evoking device according to an embodiment of the present application. The voice evoking device can be integrated in the vehicle-mounted system in the embodiment or can be an independent vehicle-mounted system. As shown in fig. 12, the voice call-out device may include a processor 20, an output device 21, an input device 22, a memory 23, and at least one communication bus 24. A communication bus 24 is used to enable communication connections between the elements. The memory 23 may comprise a high speed RAM memory, and may also include a non-volatile memory NVM, such as at least one disk memory, in which various programs may be stored for performing various processing functions and implementing the method steps of the present embodiment.
Alternatively, the processor 20 may be implemented by, for example, a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Digital Signal Processor (DSP), a Digital Signal Processing Device (DSPD), a Programmable Logic Device (PLD), a Field Programmable Gate Array (FPGA), a controller, a microcontroller, a microprocessor, or other electronic components, and the processor 20 is coupled to the input device 22 and the output device 21 through an in-vehicle line or a wireless connection.
Optionally, the input device 22 may include a variety of input devices, such as at least one of a user-oriented user interface, a device-oriented device interface, and a transceiver. Optionally, the device interface facing the device may be a wired interface for data transmission between devices, or may also be a hardware plug-in interface (for example, a USB interface, a serial port, an interface between vehicle body hardware facilities, etc.) for data or instruction transmission between devices; optionally, the user-facing user interface may be, for example, a user-facing control key, a voice input device for receiving voice input, and a touch sensing device (e.g., a touch screen with a touch sensing function, a touch pad, etc.) for receiving user touch input; optionally, the transceiver may be a radio frequency transceiver chip with a communication function, a baseband processing chip, a transceiver antenna, and the like. The voice call-out device in the embodiment of the present application is a general voice call-out device, which can be applied to any control system or control device or other types of devices. Alternatively, the output device 21 may be a corresponding output interface with a communication function, or a voice playing device or a transceiver.
Optionally, the voice call-out device may be a voice call-out device for a vehicle, for example, a voice call-out device for an automobile, for an aircraft, or for a watercraft. The details of the voice call-out device for a vehicle are described in another embodiment below and are not detailed here.
In the embodiment of the present application, the processor 20 is configured to acquire a voice interaction scene, and determine a corresponding first voice according to the voice interaction scene;
an output device 21, coupled to the processor 20, for broadcasting the first voice to a user, where the first voice is a voice prompting the user to interact with the vehicle or a device on the vehicle;
an input device 22 coupled to the processor 20 for receiving operation information input by a user according to the first voice;
the processor 20 is further configured to execute a corresponding operation according to the operation information obtained by the input device 22.
The voice call-up device provided by the embodiment of the application can execute the method embodiment, the implementation principle and the technical effect are similar, and details are not repeated herein.
Optionally, the input device 22 is specifically configured to receive a user operation instruction input by a user according to the first voice;
the processor 20 is specifically configured to judge, according to the user operation instruction, whether the user operation is the operation that the first voice instructed the user to perform, and to instruct the output device 21 to stop broadcasting the first voice when it is.
Optionally, the input device 22 is specifically configured to receive a second voice input by the user according to the content prompted by the first voice, where the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is a selection result of the user;
the processor 20 is specifically configured to execute a corresponding operation according to the second voice.
Optionally, the input device 22 is further configured to obtain parameter information related to vehicle driving; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
the processor 20 is specifically configured to determine a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
Optionally, the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes the vehicle's own condition information and/or vehicle warning tone information. Optionally, the vehicle warning tone information includes the type of the vehicle warning tone or the number of times the vehicle warning tone has been broadcast.
Optionally, the parameter information includes user behavior information;
the output device 21 is further configured to send an obtaining instruction to a device storing the user behavior information, where the obtaining instruction carries an authorization code preset by the user or an identifier of the vehicle;
the input device 22 is specifically configured to receive user behavior information sent by the device after verifying that the authorization code or the identifier of the vehicle is legal.
Optionally, the parameter information includes user behavior information, and the input device 22 is specifically configured to receive user behavior information input by a user.
Optionally, the user behavior information includes a user to-do event.
Optionally, the event to be handled by the user includes an occurrence time of the event to be handled by the user;
the processor 20 is further configured to determine, according to the occurrence time of the event to be handled by the user, a time at which the first voice is broadcasted to the user;
the output device 21 is specifically configured to broadcast the first voice to the user when the time arrives.
The voice call-up device provided by the embodiment of the application can execute the method embodiment, the implementation principle and the technical effect are similar, and details are not repeated herein.
Fig. 13 is a block diagram of an in-vehicle system according to an embodiment of the present application. The in-vehicle system 800 may be a device integrating multiple functions; for example, it may be an in-vehicle computer or a head unit, and it may include the voice call-out device described above.
Referring to FIG. 13, an in-vehicle system 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls the overall operation of the in-vehicle system 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 802 may include one or more processors 820 executing instructions to perform all or some of the steps of S101-S706 in the voice evoking method described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation at the in-vehicle system 800. Examples of such data include instructions for any application or method operating on the in-vehicle system 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the in-vehicle system 800. Power components 806 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for in-vehicle system 800.
The multimedia component 808 includes a screen that provides an output interface between the in-vehicle system 800 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 may also include a front facing camera.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive an external audio signal when the in-vehicle system 800 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be click wheels, buttons, etc. These buttons may include, but are not limited to: a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the on-board system 800. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the in-vehicle system 800 and other devices. The in-vehicle system 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the in-vehicle system 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described voice evoking methods.
On the basis of the description of the general voice call-out device in fig. 12, the present application also provides another embodiment, which specifically discloses a voice call-out device for a vehicle. Optionally, the voice call-out device may be integrated in a central control system of the vehicle, for example, in the vehicle-mounted system according to the above-described embodiment. Optionally, the vehicle-mounted system may be a system integrated with the head unit on the vehicle, such as a vehicle-mounted navigation system and/or a vehicle-mounted entertainment system, and may also be a system including the head unit and other devices of the vehicle, such as sensors. Optionally, the voice call-out device for a vehicle includes but is not limited to: head-unit equipment, control equipment fitted after the vehicle leaves the factory, and the like.
Specifically, the voice call-out device for a vehicle may include: an onboard input device, an onboard processor, an onboard output device, and other additional devices. It should be noted that, in the "onboard input device", "onboard output device", and "onboard processor" referred to in the embodiments of the present application, "onboard" means carried on a vehicle of any type: the devices may be carried on an automobile, on an aircraft, or on other types of vehicles, and the embodiments of the present application do not limit its meaning. Taking an automobile as an example, the onboard input device may be a vehicle-mounted input device, the onboard processor a vehicle-mounted processor, and the onboard output device a vehicle-mounted output device.
Depending on the type of vehicle being installed, the onboard processor may be implemented using various Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), Central Processing Units (CPUs), controllers, micro-controllers, microprocessors, or other electronic components, and may be used to perform the methods described above. The onboard processor is coupled to the onboard input device and the onboard output device via an in-vehicle line or wireless connection. According to the method in the embodiment corresponding to fig. 2 to 8, the onboard processor is configured to acquire a voice interaction scenario, and determine a corresponding first voice according to the voice interaction scenario.
Depending on the type of vehicle in which it is installed, the onboard output device may be an interface capable of interacting with a user (e.g., a voice announcement device, speakers, headphones, etc.) or may be a transceiver that establishes wireless transmissions with a user's handheld device or the like, which may be coupled to the onboard input device and onboard processor by in-vehicle wiring or wirelessly. According to the method in the embodiment corresponding to fig. 2 to 8, the onboard output device is configured to broadcast the first voice to the user, where the first voice is a voice prompting the user to interact with the vehicle or a device on the vehicle.
Depending on the type of vehicle in which it is installed, the onboard input device may include a variety of input devices, and may include, for example, at least one of a user-facing in-vehicle user interface, a device-facing in-vehicle device interface, and a transceiver. Optionally, the device interface facing the device may be a wired interface for data transmission between the devices (for example, a connection interface with a vehicle data recorder on a console of the vehicle, a line interface between the console of the vehicle and a vehicle door, a hardware interface between the console of the vehicle and a vehicle-mounted air conditioner), a hardware plug-in interface for data transmission between the devices (for example, a USB interface, a serial port, etc.), a seat belt socket of the vehicle, an interface between hardware facilities such as a vehicle engine and other control devices, etc.; alternatively, the user-oriented in-vehicle user interface may be, for example, a steering wheel control key for a vehicle, a center control key for a large or small vehicle, a voice input device for receiving voice input (e.g., a microphone mounted on a steering wheel or an operating rudder, a central sound collection device, etc.), and a touch sensing device (e.g., a touch screen with touch sensing function, a touch pad, etc.) for receiving user touch input by a user; optionally, the transceiver may be a radio frequency transceiver chip, a baseband processing chip, a transceiver antenna, and the like, which have a communication function in a vehicle. According to the method in the embodiment corresponding to fig. 2 to 8, the onboard input device is used for receiving the operation information input by the user according to the first voice, and the onboard processor is further used for executing corresponding operation according to the operation information obtained by the onboard input device.
Further, the onboard processor may also be used to perform all or part of the steps in the embodiments of the voice call-up method corresponding to fig. 3 to fig. 8, which are not described again here.
A computer/processor readable storage medium having stored therein program instructions for causing a computer/processor to perform:
acquiring a voice interaction scene;
determining corresponding first voice according to the voice interaction scene, and broadcasting the first voice to a user, wherein the first voice is voice prompting the user to interact with the vehicle or equipment on the vehicle;
and receiving operation information input by a user according to the first voice, and executing corresponding operation according to the operation information.
Optionally, the receiving operation information input by the user according to the first voice, and executing a corresponding operation according to the operation information specifically includes:
receiving a user operation instruction input by a user according to the first voice;
judging whether the user operation is the user operation executed by the user indicated by the first voice according to the user operation instruction;
if so, stopping broadcasting the first voice.
Optionally, the receiving operation information input by the user according to the first voice, and executing a corresponding operation according to the operation information specifically includes:
receiving a second voice input by a user according to the content prompted by the first voice, wherein the first voice is used for prompting the user to perform selection operation aiming at the content prompted by the first voice, and the second voice is a selection result of the user;
and executing corresponding operation according to the second voice.
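The two response-handling branches above — stopping the broadcast once the instructed operation is performed, and executing an operation selected by the second voice — might be sketched as follows. The operation names and the answer-to-action mapping are hypothetical; the patent does not fix them.

```python
def handle_operation_instruction(instruction, expected_operation, stop_broadcast):
    """If the user performed the operation the first voice instructed,
    stop broadcasting the first voice (illustrative sketch)."""
    if instruction == expected_operation:
        stop_broadcast()
        return True
    return False

def handle_second_voice(second_voice, actions):
    """Dispatch the user's selection result (the second voice) to the
    corresponding operation; `actions` maps recognized answers to callables."""
    handler = actions.get(second_voice.strip().lower())
    return handler() if handler is not None else None
```

A recognized answer such as "yes" would map, for example, to starting navigation or initiating a call, per the examples earlier in this document.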
Optionally, the obtaining of the voice interaction scene corresponding to the current driving state of the vehicle specifically includes:
acquiring parameter information related to vehicle running; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and determining a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
Optionally, the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes the vehicle's own condition information and/or vehicle warning tone information. Optionally, the vehicle warning tone information includes the type of the vehicle warning tone or the number of times the vehicle warning tone has been broadcast.
Optionally, the parameter information includes user behavior information, and the acquiring parameter information related to vehicle driving specifically includes:
sending an obtaining instruction to equipment for storing the user behavior information, wherein the obtaining instruction carries an authorization code preset by the user or the identifier of the vehicle;
and receiving user behavior information sent by the equipment after verifying that the authorization code or the identification of the vehicle is legal.
Optionally, the parameter information includes user behavior information, and the acquiring parameter information related to vehicle driving specifically includes:
user behavior information input by a user is received.
Optionally, the user behavior information includes a user to-do event.
Optionally, the event to be handled by the user includes the occurrence time of the event to be handled by the user, and the broadcasting the first voice to the user specifically includes:
determining the time for broadcasting the first voice to the user according to the occurrence time of the event to be handled by the user;
and when the moment arrives, broadcasting the first voice to the user.
The readable storage medium may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
On the basis of the above embodiments, the present application further provides a vehicle-mounted internet operating system. Those skilled in the art will appreciate that the vehicle-mounted internet operating system is the computer program that manages and controls the hardware of the voice call-up device shown in fig. 12 or fig. 13, the hardware of the vehicle-mounted system, or the hardware of the voice call-up device for a vehicle referred to in the present application, together with the software resources referred to in the present application; it is software that runs directly on the voice call-up device, on the voice call-up device for a vehicle, or on the vehicle-mounted system of fig. 13. The operating system may serve as an interface between the user and the voice call-up device (or the voice call-up device for a vehicle), and also as an interface between the hardware and other software.
The vehicle-mounted internet operating system can interact with other modules or functional equipment on a vehicle to control functions of the corresponding modules or functional equipment.
Specifically, taking the vehicle in the above embodiments as an example, and taking the voice call-up device to be the head unit of the vehicle, the vehicle-mounted internet operating system provided by the present application, together with the development of vehicle communication technology, means that the vehicle is no longer isolated from the communication network: the vehicle and a server or network server can be interconnected to form a network, thereby forming a vehicle-mounted internet. The vehicle-mounted internet can provide voice communication services, positioning services, navigation services, mobile internet access, vehicle emergency rescue, vehicle data and management services, vehicle-mounted entertainment services, and the like.
The structure of the vehicle-mounted internet operating system provided by the present application is described in detail below. Fig. 14 is a schematic structural diagram of a vehicle-mounted internet operating system according to an embodiment of the present application. As shown in fig. 14, the operating system provided by the present application includes:
the voice control unit 31 is used for determining a corresponding first voice according to the acquired voice interaction scene and broadcasting the first voice to the user;
the operation control unit 32 is used for controlling the voice call-up system to execute corresponding operation according to the operation information acquired by the vehicle-mounted input equipment; and the operation information is input to the vehicle-mounted input equipment by the user according to the first voice.
Specifically, the voice call-up system in this embodiment may include part of the hardware of the voice evoking device in the above embodiments, for example the processor and the output device. The voice call-up system may be integrated into the vehicle-mounted internet operating system, or may serve as a system that assists the vehicle-mounted internet operating system in performing the corresponding functional operations.
The voice control unit 31 may control the voice call-up system to determine the voice interaction scene corresponding to the current driving state according to at least one type of parameter information among the collected external driving environment information, vehicle state information, and user behavior information. Optionally, the voice interaction scene may be acquired by the voice control unit 31 itself, or by the voice call-up system under the control of the voice control unit 31.
In addition, the vehicle-mounted input device in this embodiment may include the input device in the above embodiments; that is, after the voice control unit 31 controls the voice call-up system to broadcast the first voice to the user, the user inputs operation information to the vehicle-mounted input device according to the first voice, so that the operation control unit 32 can control the voice call-up system to perform the corresponding operation according to the operation information.
Further, the vehicle-mounted internet operating system may control the corresponding components to perform the methods described in fig. 2 to fig. 8 through the voice control unit 31 and the operation control unit 32, either alone or in combination with other units.
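A minimal, hypothetical sketch of how the two units described above could cooperate is given below. The scene names, prompt texts, and class interfaces are illustrative assumptions, not details from the application; the application specifies only that a pre-established mapping from voice interaction scenes to broadcast voices exists and that the operation control unit acts on the user's reply.

```python
from typing import Optional

# Assumed pre-established mapping between voice interaction scenes and
# broadcast voices (the application describes the mapping, not its contents).
SCENE_TO_VOICE = {
    "low_fuel": "Fuel is low. Navigate to the nearest gas station?",
    "rain_ahead": "Rain detected ahead. Close the windows?",
}

class VoiceControlUnit:
    """Determines the first voice from the acquired voice interaction scene."""
    def first_voice(self, scene: str) -> Optional[str]:
        return SCENE_TO_VOICE.get(scene)

class OperationControlUnit:
    """Executes the operation matching the reply that the user entered on the
    vehicle-mounted input device after hearing the first voice."""
    def execute(self, operation_info: str) -> str:
        if operation_info.strip().lower() == "yes":
            return "operation performed"
        return "operation dismissed"
```

In this sketch, a "low_fuel" scene yields the corresponding prompt, and a "yes" reply routed through the operation control unit triggers the suggested operation.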
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (32)

1. A method of voice arousal, comprising:
acquiring a voice interaction scene; the voice interaction scene is a scene capable of actively triggering a vehicle-mounted system to call a voice function to broadcast a voice for interacting with a user, and the vehicle-mounted system establishes mapping relationships between different voice interaction scenes and different broadcast voices in advance;
determining corresponding first voice according to the voice interaction scene, and actively broadcasting the first voice to a user, wherein the first voice is voice for prompting the user to interact with a vehicle or equipment on the vehicle;
receiving operation information input by a user according to the first voice, and executing corresponding operation according to the operation information;
the acquiring of the voice interaction scene specifically includes:
acquiring parameter information related to the running of the vehicle; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and determining a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
2. The method according to claim 1, wherein the receiving operation information input by the user according to the first voice and executing a corresponding operation according to the operation information specifically includes:
receiving a user operation instruction input by a user according to the first voice;
judging, according to the user operation instruction, whether the user operation is the operation that the first voice instructs the user to perform;
if so, stopping broadcasting the first voice.
3. The method according to claim 1, wherein the receiving operation information input by the user according to the first voice and executing a corresponding operation according to the operation information specifically includes:
receiving a second voice input by a user according to the content prompted by the first voice, wherein the first voice is used for prompting the user to perform selection operation aiming at the content prompted by the first voice, and the second voice is a selection result of the user;
and executing corresponding operation according to the second voice.
4. The method according to claim 1, wherein the external driving environment information includes road condition information and/or weather information, and the vehicle state information includes vehicle own condition information and/or vehicle warning sound information.
5. The method according to claim 4, wherein the vehicle warning tone information includes a type of the vehicle warning tone or a number of announcements of the vehicle warning tone.
6. The method according to claim 1, wherein the parameter information includes user behavior information, and the acquiring parameter information related to vehicle driving specifically includes:
sending an obtaining instruction to equipment for storing the user behavior information, wherein the obtaining instruction carries an authorization code preset by the user or the identifier of the vehicle;
and receiving user behavior information sent by the equipment after verifying that the authorization code or the identification of the vehicle is legal.
7. The method according to claim 1, wherein the parameter information includes user behavior information, and the acquiring parameter information related to vehicle driving specifically includes:
receiving user behavior information input by the user.
8. The method according to claim 6 or 7, wherein the user behavior information comprises a user to-do event.
9. The method according to claim 8, wherein the user to-do event includes an occurrence time of the user to-do event, and the broadcasting the first voice to the user specifically includes:
determining the time for broadcasting the first voice to the user according to the occurrence time of the event to be handled by the user;
and when the moment arrives, broadcasting the first voice to the user.
10. A voice evoking device, comprising:
the acquisition module is used for acquiring a voice interaction scene; the voice interaction scene is a scene capable of actively triggering a vehicle-mounted system to call a voice function to broadcast a voice for interacting with a user, and the vehicle-mounted system establishes mapping relationships between different voice interaction scenes and different broadcast voices in advance;
the determining module is used for determining corresponding first voice according to the voice interaction scene acquired by the acquiring module, wherein the first voice is voice for prompting a user to interact with a vehicle or equipment on the vehicle;
the voice broadcasting module is used for actively broadcasting the first voice determined by the determining module to a user;
the receiving module is used for receiving operation information input by the user according to the first voice determined by the determining module;
the processing module is used for executing corresponding operation according to the operation information received by the receiving module;
wherein, the obtaining module includes:
the acquisition submodule is used for acquiring parameter information related to vehicle running; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and the determining submodule is used for determining a voice interaction scene corresponding to the current running state of the vehicle according to the parameter information.
11. The device according to claim 10, wherein the receiving module is specifically configured to receive a user operation instruction input by a user according to the first voice;
the processing module is specifically configured to judge, according to the user operation instruction, whether the user operation is the operation that the first voice instructs the user to perform, and to instruct the voice broadcasting module to stop broadcasting the first voice when it is determined that the user operation is that operation.
12. The apparatus according to claim 10, wherein the receiving module is specifically configured to receive a second voice input by the user according to the content prompted by the first voice, where the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is a selection result of the user;
the processing module is specifically configured to execute a corresponding operation according to the second voice.
13. The apparatus of claim 10, wherein the external driving environment information comprises road condition information and/or weather information, and the vehicle state information comprises vehicle own condition information and/or vehicle warning sound information.
14. The apparatus according to claim 13, wherein the vehicle warning tone information includes a type of the vehicle warning tone or a number of announcements of the vehicle warning tone.
15. The apparatus according to claim 10, wherein the parameter information includes user behavior information, and the obtaining sub-module specifically includes:
a sending unit, configured to send an obtaining instruction to a device that stores the user behavior information, where the obtaining instruction carries an authorization code preset by the user or an identifier of the vehicle;
and the receiving unit is used for receiving the user behavior information sent by the equipment after verifying that the authorization code or the identification of the vehicle is legal.
16. The apparatus according to claim 10, wherein the parameter information includes user behavior information, and the obtaining sub-module is specifically configured to receive user behavior information input by a user.
17. The apparatus of claim 15 or 16, wherein the user behavior information comprises a user to-do event.
18. The apparatus of claim 17, wherein the user to-do event comprises an occurrence time of the user to-do event; the voice broadcast module includes:
the determining unit is used for determining the time for broadcasting the first voice to the user according to the occurrence time of the event to be handled of the user;
and the broadcasting unit is used for broadcasting the first voice to the user when the time determined by the determining unit arrives.
19. A voice evoking device, comprising:
the processor is used for acquiring a voice interaction scene and determining a corresponding first voice according to the voice interaction scene; the voice interaction scene is a scene capable of actively triggering a vehicle-mounted system to call a voice function to broadcast a voice for interacting with a user, and the vehicle-mounted system establishes mapping relationships between different voice interaction scenes and different broadcast voices in advance;
an output device, coupled to the processor, for actively broadcasting the first voice to a user, the first voice being a voice prompting the user to interact with a vehicle or a device on the vehicle;
the input device is coupled to the processor and used for receiving operation information input by a user according to the first voice;
the processor is further configured to execute a corresponding operation according to the operation information obtained by the input device;
the input device is also used for acquiring parameter information related to the running of a vehicle; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
the processor is specifically configured to determine a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
20. The device according to claim 19, wherein the input device is specifically configured to receive a user operation instruction input by a user according to the first voice;
the processor is specifically configured to judge, according to the user operation instruction, whether the user operation is the operation that the first voice instructs the user to perform, and to instruct the output device to stop broadcasting the first voice when it is determined that the user operation is that operation.
21. The device according to claim 19, wherein the input device is specifically configured to receive a second voice input by the user according to the content prompted by the first voice, the first voice is used to prompt the user to perform a selection operation on the content prompted by the first voice, and the second voice is a selection result of the user;
the processor is specifically configured to execute a corresponding operation according to the second voice.
22. The apparatus of claim 19, wherein the external driving environment information comprises road condition information and/or weather information, and the vehicle state information comprises vehicle own condition information and/or vehicle warning sound information.
23. The apparatus according to claim 22, wherein the vehicle warning tone information includes a type of the vehicle warning tone or a broadcast number of the vehicle warning tone.
24. The apparatus of claim 19, wherein the parameter information comprises user behavior information;
the output device is further configured to send an acquisition instruction to a device storing the user behavior information, where the acquisition instruction carries an authorization code preset by the user or an identifier of the vehicle;
the input device is specifically configured to receive user behavior information sent by the device after verifying that the authorization code or the identifier of the vehicle is legitimate.
25. The device of claim 19, wherein the parameter information comprises user behavior information, and wherein the input device is specifically configured to receive user behavior information input by a user.
26. The device of claim 24 or 25, wherein the user behavior information comprises a user to-do event.
27. The device of claim 26, wherein the user to-do event comprises an occurrence time of the user to-do event;
the processor is further configured to determine a time for broadcasting the first voice to the user according to the occurrence time of the event to be handled by the user;
and the output equipment is specifically used for broadcasting the first voice to the user when the time arrives.
28. A voice arousing apparatus for a vehicle, comprising: the system comprises an airborne processor, an airborne output device and an airborne input device;
the onboard processor is used for acquiring a voice interaction scene and determining a corresponding first voice according to the voice interaction scene; the voice interaction scene is a scene capable of actively triggering a vehicle-mounted system to call a voice function to broadcast a voice for interacting with a user, and the vehicle-mounted system establishes mapping relationships between different voice interaction scenes and different broadcast voices in advance;
the onboard output device is coupled to the onboard processor and is used for actively broadcasting the first voice to a user, wherein the first voice is voice for prompting the user to interact with the vehicle or a device on the vehicle;
the onboard input device is coupled to the onboard processor and used for receiving operation information input by a user according to the first voice;
the onboard processor is further used for executing corresponding operation according to the operation information obtained by the onboard input equipment;
the airborne input equipment is also used for acquiring parameter information related to the running of a vehicle; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
the onboard processor is specifically used for determining a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
29. The voice arousing apparatus for a vehicle of claim 28, wherein the onboard input device comprises at least one of a user-facing onboard user interface, a device-facing onboard device interface, and a transceiver.
30. The voice arousing apparatus for a vehicle of claim 29, wherein the user-facing onboard user interface comprises one or more of:
a console control key;
a steering wheel control button;
a voice receiving device;
touch sensing equipment.
31. The voice arousing apparatus for a vehicle according to any one of claims 28 to 30, wherein the onboard processor is further adapted to perform the method according to any one of claims 2 to 9.
32. An in-vehicle internet operating system, comprising:
the voice control unit determines a corresponding first voice according to the acquired voice interaction scene and actively broadcasts the first voice to the user; the voice interaction scene is a scene capable of actively triggering a vehicle-mounted system to call a voice function to broadcast a voice for interacting with a user, and the vehicle-mounted system establishes mapping relationships between different voice interaction scenes and different broadcast voices in advance;
the operation control unit is used for controlling the voice call-up system to execute corresponding operation according to the operation information acquired by the vehicle-mounted input equipment; the operation information is input to the vehicle-mounted input equipment by a user according to the first voice;
the voice control unit or the voice call-up system is also used for acquiring parameter information related to the running of a vehicle; wherein the parameter information includes at least one of external driving environment information, vehicle state information and user behavior information;
and the voice control unit is used for controlling the voice evocative system to determine a voice interaction scene corresponding to the current driving state of the vehicle according to the parameter information.
CN201610246576.5A 2016-04-20 2016-04-20 Voice call-up method, device and equipment Active CN107303909B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201610246576.5A CN107303909B (en) 2016-04-20 2016-04-20 Voice call-up method, device and equipment
PCT/CN2017/080387 WO2017181901A1 (en) 2016-04-20 2017-04-13 Voice wake-up method, apparatus and device
TW106112807A TW201742424A (en) 2016-04-20 2017-04-17 Voice wake-up method, apparatus and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610246576.5A CN107303909B (en) 2016-04-20 2016-04-20 Voice call-up method, device and equipment

Publications (2)

Publication Number Publication Date
CN107303909A CN107303909A (en) 2017-10-31
CN107303909B true CN107303909B (en) 2020-06-23

Family

ID=60115613

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610246576.5A Active CN107303909B (en) 2016-04-20 2016-04-20 Voice call-up method, device and equipment

Country Status (3)

Country Link
CN (1) CN107303909B (en)
TW (1) TW201742424A (en)
WO (1) WO2017181901A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107878467B (en) * 2017-11-10 2020-07-17 爱驰汽车(上海)有限公司 Voice broadcasting method and system for automobile
CN110096249A (en) * 2018-01-31 2019-08-06 阿里巴巴集团控股有限公司 Methods, devices and systems for prompting fast to wake up word
CN108520744B (en) * 2018-03-15 2020-11-10 斑马网络技术有限公司 Voice control method and device, electronic equipment and storage medium
CN108923808A (en) * 2018-06-05 2018-11-30 上海博泰悦臻网络技术服务有限公司 Vehicle and its car-mounted terminal and speech interaction mode active triggering method
CN110874202B (en) * 2018-08-29 2024-04-19 斑马智行网络(香港)有限公司 Interaction method, device, medium and operating system
CN111007938B (en) * 2018-10-08 2023-11-28 盒马(中国)有限公司 Interactive device and processing method and device thereof
CN109741740B (en) * 2018-12-26 2021-04-16 苏州思必驰信息科技有限公司 Voice interaction method and device based on external trigger
CN109532725A (en) * 2019-01-09 2019-03-29 北京梧桐车联科技有限责任公司 A kind of onboard system
CN109817214B (en) * 2019-03-12 2021-11-23 阿波罗智联(北京)科技有限公司 Interaction method and device applied to vehicle
CN111724772A (en) * 2019-03-20 2020-09-29 阿里巴巴集团控股有限公司 Interaction method and device of intelligent equipment and intelligent equipment
CN110209278A (en) * 2019-05-30 2019-09-06 广州小鹏汽车科技有限公司 People-car interaction method, apparatus, storage medium and controlling terminal
CN110203209A (en) * 2019-06-05 2019-09-06 广州小鹏汽车科技有限公司 A kind of phonetic prompt method and device
CN112109729B (en) * 2019-06-19 2023-06-06 宝马股份公司 Man-machine interaction method, device and system for vehicle-mounted system
CN112447180A (en) * 2019-08-30 2021-03-05 华为技术有限公司 Voice wake-up method and device
CN111204339B (en) * 2019-12-31 2022-02-08 浙江合众新能源汽车有限公司 Method and device for actively starting LKA function through voice
CN111086511B (en) * 2019-12-31 2021-05-07 浙江合众新能源汽车有限公司 Method and device for actively starting TJA function of automobile through voice
CN112092820B (en) * 2020-09-03 2022-03-18 广州小鹏汽车科技有限公司 Initialization setting method for vehicle, and storage medium
CN112201240B (en) * 2020-09-27 2023-03-14 上汽通用五菱汽车股份有限公司 Vehicle control method, vehicle-mounted screenless device, server and readable storage medium
CN113113015A (en) * 2020-11-17 2021-07-13 广州小鹏汽车科技有限公司 Interaction method, information processing method, vehicle and server
CN112634551B (en) * 2020-11-30 2022-07-08 中油国家油气钻井装备工程技术研究中心有限公司 Voice alarm control method in driller room
CN113548062B (en) * 2021-08-03 2022-12-30 奇瑞汽车股份有限公司 Interactive control method and device for automobile and computer storage medium
CN113990322B (en) * 2021-11-04 2023-10-31 广州小鹏汽车科技有限公司 Voice interaction method, server, voice interaction system and medium
CN114465837B (en) * 2022-01-30 2024-03-08 云知声智能科技股份有限公司 Collaborative wake-up processing method and device for intelligent voice equipment
CN114559886B (en) * 2022-02-26 2024-03-29 东莞市台铃车业有限公司 Anti-false touch control method, system, device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257680A (en) * 2008-03-26 2008-09-03 宇龙计算机通信科技(深圳)有限公司 Mobile terminal with navigation function and navigation method
CN101951553A (en) * 2010-08-17 2011-01-19 深圳市子栋科技有限公司 Navigation method and system based on speech command
CN104535074A (en) * 2014-12-05 2015-04-22 惠州Tcl移动通信有限公司 Bluetooth earphone-based voice navigation method, system and terminal
CN204736855U (en) * 2015-06-07 2015-11-04 沈陆垚 Modularization intelligence car speech control and safe support system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110040707A1 (en) * 2009-08-12 2011-02-17 Ford Global Technologies, Llc Intelligent music selection in vehicles
US20120268294A1 (en) * 2011-04-20 2012-10-25 S1Nn Gmbh & Co. Kg Human machine interface unit for a communication device in a vehicle and i/o method using said human machine interface unit
KR101724748B1 (en) * 2011-12-06 2017-04-19 현대자동차주식회사 Speech recognition apparatus for vehicle
CN102941852B (en) * 2012-10-30 2015-12-02 青岛海信网络科技股份有限公司 Intelligent vehicle mounted terminal
DE102012220131A1 (en) * 2012-11-06 2014-05-08 Robert Bosch Gmbh A method of activating voice interaction with an occupant of a vehicle and voice interaction system for a vehicle


Also Published As

Publication number Publication date
WO2017181901A1 (en) 2017-10-26
CN107303909A (en) 2017-10-31
TW201742424A (en) 2017-12-01

Similar Documents

Publication Publication Date Title
CN107303909B (en) Voice call-up method, device and equipment
WO2017181900A1 (en) Message pushing method, device, and apparatus
US9612999B2 (en) Method and system for supervising information communication based on occupant and vehicle environment
US9188449B2 (en) Controlling in-vehicle computing system based on contextual data
US9630496B2 (en) Rear occupant warning system
KR101549559B1 (en) Input device disposed in handle and vehicle including the same
CN107771399B (en) Wireless connection management
JP2016042692A (en) Driver status indicator
US20110281562A1 (en) Providing customized in-vehicle services using a vehicle telematics unit
CN106921783A (en) Use terminal device, the system and method for communication and social networking application when driving safely
CN111179930B (en) Method and system for realizing intelligent voice interaction in driving process
US9444943B2 (en) Multimedia apparatus, method, and computer readable medium for providing hands-free service for vehicle
CN114629991A (en) Prompting method and device based on vehicle connection
CN111532259A (en) Remote control method and device for automobile and storage medium
US10911589B2 (en) Vehicle control device
WO2017100790A1 (en) Enhanced navigation instruction and user determination
CN111885559A (en) Intelligent device searching method, vehicle-mounted device system and searching device
CN105610896A (en) Method and system for launching an application
CN112153522A (en) Vehicle-mounted sound box control system and vehicle-mounted sound box
KR100879888B1 (en) Vehicle ID Service System ? Method Using ECU
CN110337057A (en) A kind of based reminding method and device for vehicle service
CN109040949A (en) A kind of car key management system
WO2023227014A1 (en) Privacy protection method and related apparatus
CN112153521B (en) Vehicle-mounted sound box control system and vehicle-mounted sound box
CN115915080A (en) Vehicle-mounted Bluetooth management method, system, state manager and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant