CN108476263B - Vehicle-mounted reminding method and terminal - Google Patents

Vehicle-mounted reminding method and terminal

Info

Publication number
CN108476263B
Authority
CN
China
Prior art keywords
terminal
voice
vehicle
data
mounted state
Prior art date
Legal status
Active
Application number
CN201780005848.5A
Other languages
Chinese (zh)
Other versions
CN108476263A (en)
Inventor
石柳
勾军委
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of CN108476263A publication Critical patent/CN108476263A/en
Application granted granted Critical
Publication of CN108476263B publication Critical patent/CN108476263B/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R11/00 Arrangements for holding or mounting articles, not otherwise provided for
    • B60R11/02 Arrangements for holding or mounting articles, not otherwise provided for, for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/725 Cordless telephones

Abstract

The application provides a vehicle-mounted reminding method, which includes the following steps: the terminal acquires data, and if the terminal determines from the data that it is in a vehicle-mounted state, the terminal automatically starts a voice interaction function. The data include sound data and/or sensor data; the vehicle-mounted state is a state in which the terminal is inside a vehicle; the sound data include sound data generated while switching between the vehicle-mounted state and the off-board state; and the sensor data include sensor data that change while switching between the vehicle-mounted state and the off-board state. The terminal can therefore automatically start the voice interaction function once the vehicle-mounted state is recognized, which improves the convenience of controlling the terminal.

Description

Vehicle-mounted reminding method and terminal
The present application claims priority to Chinese patent application No. 201611225036.5, entitled "A method and apparatus for actively providing voice services", filed with the Chinese Patent Office on December 27, 2016, which is incorporated herein by reference in its entirety.
Technical Field
The application relates to the field of communication, in particular to a vehicle-mounted reminding method and a terminal.
Background
As vehicles become more widespread, vehicle-mounted functions have become an indispensable feature of terminals. At present, the vehicle-mounted functions of a terminal mainly take the form of voice interaction with the user, so that the user can query the weather, navigate, make calls, and so on by voice, without manual operation.
However, the voice interaction function of a terminal must be triggered manually by the user, for example by long-pressing a function key to invoke the voice assistant "Siri". A terminal with vehicle-mounted functions therefore usually requires the user to actively trigger them, which makes the terminal inconvenient to operate and may even create a safety hazard while driving.
Disclosure of Invention
The application provides a vehicle-mounted reminding method and a terminal, aiming to solve the technical problem that an existing terminal requires the user to actively trigger its vehicle-mounted functions, which makes it inconvenient to operate.
In order to achieve the above object, the present application provides the following technical solutions:
A first aspect of the present application provides a vehicle-mounted reminding method, including: the terminal acquires data, and if the terminal determines from the data that it is in a vehicle-mounted state, the terminal automatically starts a voice interaction function. The data include sound data and/or sensor data; the vehicle-mounted state is a state in which the terminal is inside a vehicle; the sound data include sound data generated while switching between the vehicle-mounted state and the off-board state; and the sensor data include sensor data that change while switching between the vehicle-mounted state and the off-board state. The terminal can therefore automatically start the voice interaction function once the vehicle-mounted state is recognized, which improves the convenience of controlling the terminal.
In one implementation, starting the voice interaction function includes: issuing a first inquiry voice, where the first inquiry voice includes at least one of whether to start the air conditioner, whether to start an audio application, whether to start navigation, and whether to broadcast the weather; and receiving and executing a first voice instruction, where the first voice instruction includes a response instruction to the first inquiry voice. Actively issuing the inquiry voice further reflects the convenience of using the terminal and improves the user experience.
In one implementation, the method further includes: if the terminal does not acquire engine sound data, issuing a second inquiry voice asking whether to start the engine; and receiving a second voice instruction, where the second voice instruction includes a response instruction to the second inquiry voice. Actively issuing the voice asking whether to start the engine further reflects the convenience of using the terminal and further improves the user experience.
In one implementation, the sensor data further include acceleration data. The method further includes: if the terminal confirms from the acceleration data that the time for which the terminal has been in the driving state is greater than a preset threshold, issuing a fatigue-driving voice prompt. The fatigue-driving voice prompt includes at least one of: a voice prompt to rest, a voice inquiry asking whether to play music, and a voice prompt stating the distance from the current position to the service area ahead. Prompting about fatigue driving helps improve driving safety and further reflects the convenience of using the terminal.
In one implementation, the sensor data further include acceleration data. The method further includes: if the terminal determines from the acceleration data that it is in the stopped-driving state, the terminal records the parking position. Further, recording the parking position includes: the terminal displays the parking position in the form of a notification message. The method further includes: if an instruction to view the notification message is received, the terminal prompts navigation information from the current position to the parking position. Recording the parking position and generating the navigation information make it easy for the user to find the vehicle, further reflecting the convenience of using the terminal.
A second aspect of the present application provides a terminal, including: an input device, an output device, a processor, and a bus. The input device is configured to acquire data, the data including sound data and/or sensor data. The processor is configured to automatically start the voice interaction function if the terminal is determined, according to the data, to be in the vehicle-mounted state. The vehicle-mounted state is a state in which the terminal is inside a vehicle; the sound data include sound data generated while switching between the vehicle-mounted state and the off-board state; and the sensor data include sensor data that change while switching between the vehicle-mounted state and the off-board state. The output device is configured to output interactive voice as instructed by the processor. The terminal can automatically trigger the vehicle-mounted voice interaction function, which makes it more convenient to control.
In one implementation, the processor for initiating a voice interaction function includes: the processor instructs the output device to send out first inquiry voice, wherein the first inquiry voice comprises at least one of whether to start an air conditioner, whether to start an audio application, whether to start navigation and whether to broadcast weather. The input device is further configured to receive a first voice instruction, where the first voice instruction includes a response instruction of the first query voice. The processor is further configured to execute the first voice instruction. And the inquiry voice is actively sent, so that the use convenience of the terminal is further embodied, and the use experience of the user is improved.
In one implementation, the output device is further configured to send a second query voice of whether to start the engine if the sound data of the engine is not collected by the input device. The input device is further configured to receive a second voice instruction, where the second voice instruction includes a response instruction of the second query voice. The voice inquiring whether to start the engine is actively sent out, so that the use convenience of the terminal is further embodied, and the use experience of a user is further improved.
In one implementation, the sensor data further comprises: acceleration data. The output device is further used for prompting fatigue driving by voice if the processor confirms that the time of the terminal in the driving state is greater than a preset threshold value according to the acceleration data. The voice prompt fatigue driving comprises at least one of the following steps: voice prompt rest, voice inquiry whether to play music, and voice prompt of the distance between the front service area and the current position. The prompt of fatigue driving is beneficial to improving the driving safety, so that the use convenience of the terminal is further embodied.
In one implementation, the sensor data further comprises: acceleration data. The processor is further used for determining that the terminal is in a stop driving state according to the acceleration data; and if the terminal is determined to be in the stop driving state, recording a parking position.
In one implementation, the output device is further configured to display the parking position on the terminal in the form of a notification message. The input device is further configured to obtain an instruction to view the notification message. And the output equipment is also used for prompting navigation information from the current position to the stop position according to the instruction. The parking position is recorded to provide navigation information of the parking position, so that a user can conveniently and quickly find a vehicle, and the use convenience of the terminal is further embodied.
A third aspect of the present application provides a terminal, including: a data acquisition module and a voice control module. The data acquisition module is configured to acquire data, the data including sound data and/or sensor data. The voice control module is configured to automatically start the voice interaction function if the terminal is determined, according to the data, to be in the vehicle-mounted state; the vehicle-mounted state is a state in which the terminal is inside a vehicle, the sound data include sound data generated while switching between the vehicle-mounted state and the off-board state, and the sensor data include sensor data that change while switching between the vehicle-mounted state and the off-board state. The terminal can automatically trigger the vehicle-mounted voice interaction function, which makes it more convenient to control.
In one implementation, the voice control module for initiating the voice interaction function includes: the voice control module sends out first inquiry voice, and the first inquiry voice comprises at least one of whether an air conditioner is started or not, whether an audio application is started or not, whether navigation is started or not and whether weather is broadcasted or not. And receiving and executing a first voice instruction, wherein the first voice instruction comprises a response instruction of the first inquiry voice. And the inquiry voice is actively sent, so that the use convenience of the terminal is further embodied, and the use experience of the user is improved.
In one implementation, the voice control module is further configured to issue a second inquiry voice asking whether to start the engine if the data acquisition module does not collect engine sound data; and to receive a second voice instruction, where the second voice instruction includes a response instruction to the second inquiry voice. This further reflects the convenience of using the terminal and improves the user experience.
In one implementation, the sensor data further comprises: acceleration data. The voice control module is further used for prompting fatigue driving by voice if the time that the terminal is in the driving state is confirmed to be larger than a preset threshold value according to the acceleration data. The voice prompt fatigue driving comprises at least one of the following steps: voice prompt rest, voice inquiry whether to play music, and voice prompt of the distance between the front service area and the current position. The method is favorable for further embodying the use convenience of the terminal so as to improve the use experience of the user.
In one implementation, the sensor data further comprises: acceleration data. The voice control module is further used for recording a parking position by the terminal if the terminal is determined to be in a stop driving state according to the acceleration data.
In one implementation, the voice control module displays the parking position on the terminal in the form of a notification message. And if a viewing instruction of the notification message is received, prompting navigation information from the current position to the stop position. The recording and the prompting of the parking position are beneficial to further embodying the use convenience of the terminal so as to improve the use experience of a user.
A fourth aspect of the present application provides a computer-readable storage medium, where instructions are stored, and when the instructions are executed on a terminal, the terminal is enabled to execute the vehicle-mounted reminding method according to the first aspect of the present application.
A fifth aspect of the present application provides a computer program product containing instructions, which, when run on a terminal, causes the terminal to execute the vehicle-mounted reminder method of the first aspect of the present application.
In one implementation, the method further includes: if the terminal determines that the vehicle is in overspeed, issuing an overspeed voice prompt. Actively prompting about overspeed improves driving safety.
In one implementation, the method further includes: the terminal acquires traffic information, and if the terminal determines from the traffic information that the current road is congested, it issues a road-congestion voice prompt. Further, the terminal re-plans the driving route according to the destination, and if the terminal confirms that a driving route other than the current one exists, it prompts by voice to switch to the other route. Prompting about congestion and alternative routes further reflects the convenience of using the terminal.
In one implementation, the sound data includes at least one of: sound data of a door, sound data of a seat belt.
In one implementation, the sensor data specifically includes at least one of: barometric pressure data, altitude data.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario of a vehicle-mounted reminding method provided in the present application;
fig. 2 is a schematic structural diagram of a terminal according to an embodiment of the present invention;
FIG. 3 is a flowchart of a vehicle-mounted reminding method disclosed in the embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for determining whether a terminal is in a vehicle-mounted state according to an embodiment of the present disclosure;
fig. 5 is a flowchart for determining the possibility of occurrence of manipulation actions according to the preset correspondence between change values of sensor data and manipulation actions disclosed in an embodiment of the present invention;
fig. 6 is a flowchart illustrating a process of identifying whether the terminal is currently in a vehicle-mounted state according to the possibility of occurrence of a manipulation action according to an embodiment of the present invention;
fig. 7 is a flowchart illustrating another method for identifying whether the terminal is currently in a vehicle-mounted state according to the possibility of occurrence of a manipulation action according to an embodiment of the present invention;
fig. 8 is a flowchart illustrating a further method for identifying whether the terminal is currently in a vehicle-mounted state according to the possibility of occurrence of a manipulation action according to an embodiment of the present invention.
Detailed Description
Fig. 1 shows an application scenario of the vehicle-mounted reminding method provided by the present application, in which a user carries a terminal. The terminal disclosed in the embodiments of the present application can automatically recognize the vehicle-mounted state and actively remind the user by voice in various specific scenarios of the vehicle-mounted state, including when the driver is seated but the vehicle has not yet been started, when the vehicle is started, when the vehicle begins to drive, while the vehicle is driving, and when the vehicle stops driving.
In the following embodiments of the present application, the mobile terminal shown in fig. 1 is taken as an example; the mobile terminal may be a mobile phone, a tablet, a wearable device, or another device that the user uses daily.
Fig. 2 shows the structure of the terminal shown in fig. 1, which includes a processor, an input device, an output device, a bus, a sensor, and a recording device. Optionally, it may also include a memory, an I/O subsystem, a radio frequency circuit, other input devices, and a power supply. Those skilled in the art will appreciate that the terminal structure shown in fig. 2 is not limiting and may include more or fewer components than shown, or certain components may be combined or split, or arranged differently. Those skilled in the art will also appreciate that the display belongs to the user interface (UI) and that the terminal may include fewer or more user interfaces than shown.
Specifically, the processor connects the various parts of the entire terminal through various interfaces and lines, and runs or executes software programs and/or modules and calls data to perform the various functions of the terminal and/or process data. In an embodiment of the present application, the processor is configured to execute the methods shown in fig. 3 and fig. 4; in particular, while executing the methods shown in fig. 3 and fig. 4, the processor controls the input device to perform the input-related steps, such as acquiring data, and controls the output device to perform the output-related steps, such as outputting interactive voice. The input device and the output device are each connected to the processor through the bus.
The processor may be formed of an integrated circuit (IC), for example a single packaged IC, or several packaged ICs with the same or different functions connected together. For example, the processor may include only a central processing unit (CPU), or may be a combination of a GPU, a digital signal processor (DSP), and a control chip (e.g., a baseband chip) in the communication unit. In the embodiment of the present invention, the CPU may have a single computing core or may include multiple computing cores.
The sensor senses information and converts the sensed information into an electrical signal or another required form of data output according to a certain rule. In embodiments of the present application, the sensors include, but are not limited to, an acceleration sensor, a gyroscope, a barometer, a gravity sensor, a barometric pressure sensor, and a position sensor such as the Global Positioning System (GPS) or the BeiDou Navigation Satellite System.
The recording device monitors sound in real time and converts it into an electrical signal or another required form of data. In an embodiment of the present application, the recording device may be a low-power digital signal processor (DSP).
The memory may be used to store software programs and modules run or executed by the processor. The memory mainly includes a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function, such as a sound playing program and an image playing program, while the data storage area may store data created according to the use of the terminal (such as voice data, a phonebook, and the like). In an embodiment of the present invention, the memory may include a volatile memory, such as a non-volatile dynamic random access memory (NVRAM), a phase-change random access memory (PRAM), or a magnetoresistive random access memory (MRAM), and a non-volatile memory, such as at least one magnetic disk storage device, an electrically erasable programmable read-only memory (EEPROM), or a flash memory device such as NOR flash memory or NAND flash memory. The non-volatile memory stores the operating system and the application programs executed by the processor. The processor loads running programs and data from the non-volatile memory into memory and stores digital content in a mass storage device. The operating system includes various components and/or drivers for controlling and managing conventional system tasks, such as memory management, storage device control, and power management, as well as facilitating communication between various hardware and software components. In the embodiment of the present invention, the operating system may be the Android system developed by Google, the iOS system developed by Apple, the Windows Phone operating system developed by Microsoft, or an embedded operating system such as VxWorks.
The I/O subsystem is used to control external input and output devices, and may include an other-input-device controller, a sensor controller, and a display controller. Optionally, one or more other-input-device controllers receive signals from and/or send signals to other input devices, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, click wheels, and light mice (a light mouse is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by a touch screen). It is noted that the other-input-device controller may be connected to any one or more of the above devices. The display controller in the I/O subsystem receives signals from and/or sends signals to the display screen. After the display screen detects a user input, the display controller converts the detected input into interaction with a user interface object displayed on the display screen, thereby realizing human-computer interaction. The sensor controller may receive signals from and/or send signals to one or more sensors.
Other input devices may be used to receive input numeric or character information and generate key signal inputs relating to user settings and function controls of the terminal. In particular, other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, a light mouse (a light mouse is a touch-sensitive surface that does not display visual output, or is an extension of a touch-sensitive surface formed by a touch screen), and the like. The other input devices are connected with other input device controllers of the I/O subsystem and are in signal interaction with the processor under the control of the other input device controllers.
The display screen may be used to display information input by or provided to the user and various menus of the terminal, and may also accept user input. The display screen may include a display panel and a touch panel. The Display panel may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), and the like. The touch panel, also called a touch screen, a touch sensitive screen, etc., may collect contact or non-contact operations (such as operations performed by a user on or near the touch panel using any suitable object or accessory, such as a finger, a stylus, etc., and may also include somatosensory operations, including operation types such as single-point control operations, multi-point control operations, etc.) on or near the touch panel, and drive the corresponding connection device according to a preset program. Alternatively, the touch panel may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction and gesture of a user, detects signals brought by touch operation and transmits the signals to the touch controller; the touch controller receives touch information from the touch detection device, converts the touch information into information capable of being processed by the processor, sends the information to the processor, and receives and executes commands sent by the processor. In addition, the touch panel may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, a surface acoustic wave, and the like, and may also be implemented by any technology developed in the future. Further, the touch panel may cover the display panel, a user may operate on or near the touch panel covered on the display panel according to the content displayed on the display panel (the display content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, etc.), the touch panel detects the operation on or near the touch panel, and transmits the operation to the processor through the I/O subsystem to determine a user input, and then the processor provides a corresponding visual output on the display panel through the I/O subsystem according to the user input. Although in fig. 2 the touch panel and the display panel are two separate components to implement the input and output functions of the terminal, in some embodiments the touch panel and the display panel may be integrated to implement the input and output functions of the terminal.
The power supply is used to power the various components of the terminal to maintain its operation. As a general understanding, the power source may be a built-in battery, such as a common lithium ion battery, a nickel metal hydride battery, and the like, and also include an external power source that directly supplies power to the electronic device, such as an AC adapter, and the like. In embodiments of the present invention, the power supply may be more broadly defined and may include, for example, a power management system, a charging system, a power failure detection circuit, a power converter or inverter, a power status indicator (e.g., a light emitting diode), and any other components associated with the generation, management, and distribution of electrical energy to the terminals.
The following describes in detail a specific method by which the terminal shown in fig. 2 automatically performs vehicle-mounted voice interaction.
Fig. 3 shows a vehicle-mounted reminding method disclosed in an embodiment of the present application, which includes the following steps:
S301: Sound data is acquired.
Since the vehicle-mounted reminding method relies on recognizing whether the terminal is in the vehicle-mounted state, in this embodiment the acquired sound data include sound data generated while switching between the vehicle-mounted state and the off-board state, that is, sound data generated while the terminal is moved from outside the vehicle to inside it, or from inside to outside. During this process a door is opened or closed and the user may fasten or unfasten the seat belt, so the sound data specifically include at least one of door sound data and seat-belt sound data.
The mobile terminal being in the vehicle-mounted state may be understood as a state in which the mobile terminal is inside a vehicle, for example after the user has carried it into the vehicle from outside. The mobile terminal being in the off-board state may be understood as a state in which the mobile terminal is not inside a vehicle, for example after the user turns off the engine and carries the terminal out of the vehicle.
Optionally, the processor may acquire the sound data over a preset time window, for example 60 seconds, through the recording device of the terminal.
S302: sensor data is acquired.
Similarly, the sensor data include sensor data that change while switching between the vehicle-mounted state and the off-board state. During this process the barometric pressure and the altitude may change: as the user moves from outside the vehicle into it, the altitude of the terminal may change with the change in the user's body position, and while a door is opened or closed the air pressure inside the vehicle may change. Therefore, the sensor data specifically include at least one of barometric pressure data and altitude data. The barometric pressure data can be acquired by a barometric pressure sensor, and the altitude data can be acquired by a gravity sensor.
Optionally, the sensor data may also include acceleration data. Specifically, the processor may acquire the sensor data over a preset time window, for example 60 seconds, through the sensors of the terminal.
Alternatively, only one of S301 and S302 may be executed.
S303: Whether the terminal is in the vehicle-mounted state is determined according to the acquired data; if so, S304 is executed, and if not, S301-S303 are executed again. Optionally, S301-S303 may be executed at a preset period.
S304: the voice interaction function is automatically started.
The voice interaction function is started specifically as follows: an inquiry voice is issued, and a voice instruction is received and executed.
The inquiry voice includes at least one of whether to start the air conditioner, whether to start an audio application, whether to start navigation, and whether to broadcast the weather. The voice instruction includes a response instruction to the inquiry voice. For example, a voice inquiring whether to start the air conditioner is issued; if the user answers yes, the voice instruction is executed, that is, the in-vehicle air conditioner is started; and if another voice instruction from the user is received, for example "turn on the lights", the vehicle lights are turned on in response to that instruction.
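Purely as an illustration of this query-and-respond flow, the sketch below uses placeholder speak() and listen() helpers (stand-ins for the terminal's text-to-speech and speech-recognition interfaces, not APIs defined by this application) and a simple keyword dispatch table.

```python
# Illustrative sketch only: a simple query-and-respond loop for S304.
# speak() and listen() are stand-ins for the terminal's TTS and ASR interfaces.

def speak(text: str) -> None:
    print(f"[terminal] {text}")

def listen() -> str:
    # A real terminal would return recognized speech; here we read from stdin.
    return input("[user] ").strip().lower()

def start_voice_interaction() -> None:
    actions = {  # hypothetical dispatch table of response instructions
        "air conditioner": lambda: speak("Starting the air conditioner."),
        "navigation": lambda: speak("Starting navigation."),
        "weather": lambda: speak("Today is sunny, 20 degrees."),
        "lights": lambda: speak("Turning on the vehicle lights."),
    }
    # First inquiry voice
    speak("Shall I start the air conditioner, start navigation, or broadcast the weather?")
    reply = listen()  # first voice instruction (response to the inquiry)
    for keyword, action in actions.items():
        if keyword in reply:
            action()
            return
    speak("Sorry, I did not understand that.")

if __name__ == "__main__":
    start_voice_interaction()
```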
Furthermore, the terminal can also collect engine sound data and acceleration data so as to further identify specific states within the vehicle-mounted state, such as starting to drive, overspeed while driving, and stopping driving, and can issue corresponding prompts for the different states. That is, fig. 3 further includes the following steps:
S305: If no engine sound data is collected, an inquiry voice asking whether to start the engine is issued.
S306: If overspeed is determined, an overspeed voice prompt is issued.
For the specific way in which the terminal determines overspeed, reference may be made to the prior art; details are not repeated here.
S307: Traffic information is obtained; if congestion on the current road is determined from the traffic information, a road-congestion voice prompt is issued. Further, the driving route is re-planned according to the destination, and if a driving route other than the current one is found, the terminal prompts by voice to switch to the other route. Specifically, switching to another route by voice prompt may proceed as follows: a voice prompt describing the other route is issued, followed by an inquiry voice asking whether to switch.
Specifically, the terminal may obtain the traffic information from an installed location-information application (e.g., a map application).
S308: If the time for which the terminal is confirmed to be in the driving state is greater than a preset threshold, a fatigue-driving voice prompt is issued.
Specifically, the terminal may confirm that it is in the driving state according to the acceleration data (for the manner of acquiring the acceleration data, reference may be made to the related art), and may obtain the time spent in the driving state using a timer.
The fatigue-driving voice prompt includes at least one of the following: a voice prompt to rest, a voice inquiry asking whether to play music, and a voice prompt stating the distance from the current position to the service area ahead.
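Purely as an illustration of this timing logic, the sketch below tracks how long the terminal has been in the driving state and signals a prompt once a threshold is exceeded; the two-hour threshold and the helper names are assumptions made for the example, not values given in this application.

```python
# Illustrative sketch of the S308 fatigue-driving check (threshold value is an assumption).
import time

DRIVING_TIME_THRESHOLD_S = 2 * 60 * 60  # assumed: 2 hours of continuous driving

class FatigueMonitor:
    def __init__(self, threshold_s: float = DRIVING_TIME_THRESHOLD_S):
        self.threshold_s = threshold_s
        self.driving_since = None  # timestamp when driving started, or None

    def update(self, is_driving: bool, now: float | None = None) -> bool:
        """Return True when a fatigue-driving prompt should be issued."""
        now = time.time() if now is None else now
        if not is_driving:
            self.driving_since = None  # driving stopped: reset the timer
            return False
        if self.driving_since is None:
            self.driving_since = now  # driving just started
        return (now - self.driving_since) > self.threshold_s

monitor = FatigueMonitor(threshold_s=1.0)  # tiny threshold just to demonstrate
monitor.update(is_driving=True, now=0.0)
if monitor.update(is_driving=True, now=5.0):
    print("You have been driving for a while; the next service area is 12 km ahead.")
```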
Optionally, S306-S308 may be performed only when no navigation sound data is collected. That is, if navigation sound data is collected, indicating that navigation has probably already been started, the corresponding functions of the navigation application can be relied on and the terminal no longer reminds the user, so as to avoid the interference of repeated reminders.
S309: If the terminal is determined to be in the stopped-driving state, the parking position is recorded.
Specifically, the terminal may determine, based on the acceleration data, whether it is in the stopped-driving state, that is, whether the vehicle has stopped driving. Further, the terminal may record the parking position after detecting that the user has locked the vehicle. The terminal can then display the parking position in the form of a notification message, and if an instruction to view the notification message is received, the terminal prompts navigation information from the current position to the parking position. The prompt may be shown on the display screen or broadcast by voice.
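As a rough sketch of this parking-record flow, under the assumption of placeholder helpers (get_location() and post_notification() are hypothetical names, not APIs defined by this application):

```python
# Illustrative sketch of S309: record the parking position and offer navigation back to it.
from dataclasses import dataclass

@dataclass
class Location:
    latitude: float
    longitude: float

def get_location() -> Location:
    # Placeholder for a positioning API (GPS/BeiDou); fixed value for illustration.
    return Location(39.9042, 116.4074)

def post_notification(text: str) -> None:
    print(f"[notification] {text}")

def on_driving_stopped(vehicle_locked: bool) -> Location | None:
    """Record the parking position once the vehicle has stopped and been locked."""
    if not vehicle_locked:
        return None
    parked_at = get_location()
    post_notification(f"Car parked at {parked_at.latitude:.4f}, {parked_at.longitude:.4f}. "
                      "Tap to navigate back to it later.")
    return parked_at

def on_notification_viewed(parked_at: Location) -> None:
    current = get_location()
    # A real terminal would ask a map application for walking directions here.
    print(f"Navigating from ({current.latitude:.4f}, {current.longitude:.4f}) "
          f"to the parked car at ({parked_at.latitude:.4f}, {parked_at.longitude:.4f}).")

spot = on_driving_stopped(vehicle_locked=True)
if spot is not None:
    on_notification_viewed(spot)
```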
It should be noted that, in this embodiment, the terminal uses the corresponding sensor to acquire various sensor data, and which sensor acquires which sensor data may refer to the prior art, which is not described herein again.
As can be seen from fig. 3, the terminal can automatically recognize the vehicle-mounted state and automatically perform voice interaction. Furthermore, prompts can be issued for specific states within the vehicle-mounted state, so the terminal can provide voice prompts to the user without being triggered by the user, which is convenient for the user.
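For orientation only, the following sketch strings steps S301-S304 together as a polling loop; acquire_sound(), acquire_sensors(), and is_in_vehicle() are placeholders for the data-acquisition and recognition steps described above, not functions defined by this application.

```python
# Illustrative polling loop tying together S301-S304 (all helpers are placeholders).
import time

def acquire_sound(window_s: int = 60) -> bytes:
    return b""  # stand-in for samples from the recording device

def acquire_sensors(window_s: int = 60) -> dict:
    return {"pressure": [], "altitude": [], "acceleration": []}

def is_in_vehicle(sound: bytes, sensors: dict) -> bool:
    return False  # stand-in for the recognition process of fig. 4

def start_voice_interaction() -> None:
    print("Shall I start navigation or broadcast the weather?")

def vehicle_reminder_loop(poll_period_s: int = 60, max_polls: int = 3) -> None:
    for _ in range(max_polls):                 # bounded here only so the example terminates
        sound = acquire_sound()                # S301
        sensors = acquire_sensors()            # S302
        if is_in_vehicle(sound, sensors):      # S303
            start_voice_interaction()          # S304
            return                             # hand over to the in-vehicle states S305-S309
        time.sleep(poll_period_s)              # re-check at the preset period

vehicle_reminder_loop(poll_period_s=0, max_polls=1)
```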
Specifically, in this embodiment the process shown in fig. 4 is used to determine whether the terminal is in the vehicle-mounted state, and it includes the following steps:
S401: The possibility of occurrence of a manipulation action is determined according to the preset correspondence between sample sound data and manipulation actions, and the acquired sound data.
The manipulation actions can be preset. For the purpose of automatically identifying the vehicle-mounted state by the method described in this embodiment, any operation performed manually on the vehicle when the terminal switches from the vehicle-mounted state to the off-board state (or from the off-board state to the vehicle-mounted state), or any operation the vehicle executes in response to a human and/or an instruction issued by an automatic driving system, can be taken as a preset manipulation action.
Specifically, the manipulation actions may include: opening a door, closing a door, ignition, and the wheels stopping. Optionally, the manipulation actions may further include turning off the engine, the wheels starting to move, turning navigation on, turning navigation off, and the like.
In the preset correspondence between sound data and manipulation actions, each kind of sound data corresponds to the manipulation action that generates it; for example, the sound data generated by opening a door corresponds to the door-opening action.
Specifically, S401 is implemented by calculating the similarity between the sample sound data in the preset correspondence and the acquired sound data; this similarity is the possibility that the acquired sound data were generated by the manipulation action corresponding to the sample sound data. If the acquired sound data were generated by a manipulation action, that manipulation action occurred; therefore, the possibility that the acquired sound data were generated by a manipulation action is the possibility that the manipulation action occurred.
S402: The possibility of occurrence of a manipulation action is determined according to the preset correspondence between change values of sample sensor data and manipulation actions, and the acquired change value of the sensor data.
In the preset correspondence between change values of sample sensor data and manipulation actions, each type of change value of sensor data corresponds to the manipulation action that generates it. For example, the change in air pressure generated by closing a car door corresponds to the door-closing action, and the change in altitude generated by the user getting into the car after opening the door corresponds to the door-opening action.
Specifically, S402 is implemented by calculating the similarity between the change values of the sample sensor data in the preset correspondence and the acquired change value of the sensor data; this similarity is the possibility that the acquired change value was generated by the manipulation action corresponding to the sample change value. If the acquired change value was generated by a manipulation action, that manipulation action occurred; therefore, the possibility that the acquired change value was generated by a manipulation action is the possibility that the manipulation action occurred.
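As a toy illustration of this similarity computation, the sketch below scores acquired data against a small rule base; the feature vectors and the choice of cosine similarity are assumptions made for the example, not the specific measure defined by this application. The same pattern applies to both S401 (sound features) and S402 (change values of sensor data).

```python
# Illustrative sketch of S401/S402: possibility of each manipulation action obtained as
# the similarity of acquired data to sample data. Cosine similarity is an assumed choice.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Rule base: each manipulation action maps to a sample feature vector
# (e.g. spectral features of its sound, or a change value of sensor data).
rule_base = {
    "open door":  [0.9, 0.1, 0.3],
    "close door": [0.8, 0.2, 0.5],
    "ignition":   [0.1, 0.9, 0.2],
}

def action_possibilities(acquired: list[float]) -> dict[str, float]:
    """Similarity of the acquired data to each sample = possibility that the action occurred."""
    return {action: cosine_similarity(acquired, sample)
            for action, sample in rule_base.items()}

print(action_possibilities([0.85, 0.15, 0.35]))
```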
The order of S401 and S402 may be switched, and S401 and S402 may also be executed in parallel.
Taking manipulation actions including opening a door, closing a door, ignition, the wheels moving, and the wheels stopping as an example, fig. 5 shows a specific way for the processor to execute S401 and S402:
Taking the sound data as an example, the processor invokes a pre-acquired sound classification rule base, which contains the correspondence between sample sound data and manipulation actions.
The processor compares the acquired sound data with the sample sound data. If the similarity between the acquired sound data and the sample sound data corresponding to opening a door in the sound classification rule base is 60%, the similarity to the sample sound data corresponding to closing a door is 40%, and the similarity to the sample sound data corresponding to ignition is 5%, then the possibility that the sound represented by the acquired sound data was generated by opening a door is 60%, by closing a door 40%, and by ignition 5%. That is, from the acquired sound data it can be determined that the possibility that a door was opened is 60%, that a door was closed is 40%, and that ignition occurred is 5%.
For convenience of subsequent recognition, a preset similarity threshold may be used to filter out manipulation actions whose similarity is below the threshold. For example, the similarity between the acquired sound data and the sample sound data corresponding to the wheels moving is 1%, which is below the 5% similarity threshold, so that action is ignored.
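To make the filtering step concrete, here is a tiny sketch that applies a similarity threshold to the example possibilities above; the 5% threshold is the one mentioned in the text, everything else is illustrative.

```python
# Drop manipulation actions whose similarity falls below the preset threshold.
SIMILARITY_THRESHOLD = 0.05  # the 5% threshold from the example above

possibilities = {"open door": 0.60, "close door": 0.40, "ignition": 0.05, "wheels moving": 0.01}

filtered = {action: p for action, p in possibilities.items() if p >= SIMILARITY_THRESHOLD}
print(filtered)  # {'open door': 0.6, 'close door': 0.4, 'ignition': 0.05}
```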
The possibility that the change in barometric pressure, the change in altitude, and the change in acceleration were generated by the various manipulation actions, that is, the possibility that the manipulation actions determined from those change values occurred, can be obtained in the same way as for the sound data, as detailed in fig. 5, and is not described again here.
Each classification rule base can be obtained using existing big-data learning and analysis methods; details are not repeated here.
S403: Whether the terminal is currently in the vehicle-mounted state is identified according to the possibilities of occurrence of the manipulation actions.
In this embodiment, as shown in fig. 6, the possibilities of occurrence of the manipulation actions are used as inputs to a neural network, and the output of the neural network is that the terminal is currently in the vehicle-mounted state or is not currently in the vehicle-mounted state.
In fig. 6, in addition to the possibilities of occurrence of the manipulation actions, the order in which they occurred may also be used as an input. The order can be determined from the time stamps of the sound data. For example, if the time stamps of the sound data show that the order of occurrence was opening the door, closing the door, and then ignition, using that order as an additional input can increase the possibility of correctly recognizing the vehicle-mounted state.
Fig. 7 shows another implementation of S403. It differs from fig. 6 in that the possibilities of a given type of manipulation action determined from the sound data and from the sensor data are combined into a composite possibility for that type of action, and the composite possibilities are used as the inputs of the neural network.
Of course, fig. 7 may also take the order in which the manipulation actions occurred as an additional input. The inputs shown in fig. 6 and fig. 7 may also be combined: the composite possibilities of the various types of manipulation actions are used as inputs to the neural network, and at the same time data that strongly influence the output, such as the change value of the acceleration data, are also used as inputs, as shown in fig. 8.
Optionally, in the operation of the neural network, different weights may be set for different manipulation actions, and the weights may be learned from sample data. For the construction of the neural network (including the choice of the number of layers; the three-layer structure shown in figs. 7-8 is only an example) and the training process, reference may be made to the prior art; details are not given here.
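Illustratively, a minimal classifier in this spirit could look like the sketch below: the action possibilities feed a small feed-forward network with a single in-vehicle/not-in-vehicle output. The network size and the hard-coded weights are arbitrary assumptions for the example; in practice the weights would be learned from labelled sample data, as noted above.

```python
# Illustrative sketch of S403: a small feed-forward network mapping action possibilities
# to "in vehicle" / "not in vehicle". Weights are arbitrary example values, not learned.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Inputs: possibilities of [open door, close door, ignition], e.g. from the rule base above.
W_HIDDEN = [[1.2, 0.8, 1.5],  # 2 hidden units x 3 inputs
            [0.5, 0.9, 1.1]]
B_HIDDEN = [-0.5, -0.4]
W_OUT = [1.4, 1.0]            # 1 output unit x 2 hidden units
B_OUT = -1.0

def in_vehicle_probability(possibilities: list[float]) -> float:
    hidden = [sigmoid(sum(w * x for w, x in zip(row, possibilities)) + b)
              for row, b in zip(W_HIDDEN, B_HIDDEN)]
    return sigmoid(sum(w * h for w, h in zip(W_OUT, hidden)) + B_OUT)

p = in_vehicle_probability([0.60, 0.40, 0.05])
print("in-vehicle" if p > 0.5 else "not in-vehicle", round(p, 2))
```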
It can be seen from the process shown in fig. 4 that the terminal identifies whether it is in the vehicle-mounted state from the sound data and the sensor data, which lays the foundation for the terminal to automatically start voice prompts in the vehicle-mounted state.
Alternatively, in this embodiment of the present invention, besides the mobile terminal shown in fig. 1, the terminal may also be a terminal that is installed in and fixedly connected to a vehicle, for example a display control module on the vehicle's center console. In this case the vehicle-mounted state is the state in which the in-vehicle terminal is turned on; for example, after the vehicle is ignited, the display control module starts to operate. Alternatively, after detecting that the car has been unlocked, the display control module keeps running at low power and continuously monitors whether a user has entered the car; in that case the vehicle-mounted state generally refers to the state of the terminal during the period from when the driver gets into the car until the driver gets out. After the terminal is turned on, the voice interaction function may be triggered automatically according to the methods shown in figs. 3 and 4.
Specifically, after the terminal is started, it automatically issues an inquiry voice, receives and executes a voice instruction, and interacts intelligently with the user by voice. Further, at least one of S305-S309 may be performed.
The embodiment of the application also discloses a terminal which comprises a data acquisition module and a voice control module.
The data acquisition module is configured to acquire data, the data including sound data and/or sensor data. The voice control module is configured to automatically start the voice interaction function if the terminal is determined, according to the data, to be in the vehicle-mounted state; the vehicle-mounted state is a state in which the terminal is inside a vehicle, the sound data include sound data generated while switching between the vehicle-mounted state and the off-board state, and the sensor data include sensor data that change while switching between the vehicle-mounted state and the off-board state.
For specific functional implementation of the data acquisition module and the voice control module, reference may be made to the above method, and details are not described here.
The terminal can automatically trigger the vehicle-mounted voice interaction function, which makes it more convenient to control.
In the present specification, each embodiment is described with emphasis on differences from other embodiments, and the same or similar parts between the embodiments may be referred to each other.

Claims (21)

1. A vehicle-mounted reminding method is characterized by comprising the following steps:
the method comprises the steps that a terminal acquires data, wherein the data comprises sound data and/or sensor data;
if the terminal determines that the terminal is in a vehicle-mounted state according to the data, the terminal automatically starts a voice interaction function, the vehicle-mounted state is a state in which the terminal is in a vehicle, the sound data comprises sound data generated in a mutual switching process of the vehicle-mounted state and an off-board state, and the sensor data comprises sensor data changed in the mutual switching process of the vehicle-mounted state and the off-board state; the sensor data specifically comprises at least one of: barometric pressure data, altitude data;
the terminal determines whether the terminal is in a vehicle-mounted state according to the data, and the method comprises the following steps:
determining the possibility of the control action according to the corresponding relation between the preset sample sound data and the control action and the acquired sound data; and/or determining the possibility of the control action according to the corresponding relation between the preset change value of the sample sensor data and the control action and the obtained change value of the sensor data; identifying whether the terminal is currently in a vehicle-mounted state or not according to the possibility of occurrence of the control action;
the identifying whether the terminal is currently in a vehicle-mounted state according to the possibility of the occurrence of the control action specifically comprises:
the possibility of occurrence of the control action is used as the input of a neural network, and the output result of the neural network is that the terminal is in a vehicle-mounted state or not in the vehicle-mounted state;
the terminal is used for carrying out active voice reminding on the user when the terminal is in a vehicle-mounted state.
2. The method of claim 1, wherein the initiating a voice interaction function comprises:
sending first inquiry voice, wherein the first inquiry voice comprises at least one of whether to start an air conditioner, whether to start an audio application, whether to start navigation and whether to broadcast weather;
and receiving and executing a first voice instruction, wherein the first voice instruction comprises a response instruction of the first inquiry voice.
3. The method of claim 1 or 2, further comprising:
if the terminal does not acquire sound data of the engine, sending a second inquiry voice asking whether to start the engine;
receiving a second voice instruction, wherein the second voice instruction comprises a response instruction of the second inquiry voice.
4. The method of claim 1 or 2, further comprising:
and if the terminal determines that the speed is over-speed, sending out an over-speed voice prompt.
5. The method of claim 1 or 2, further comprising:
the terminal acquires road condition information;
and if the terminal determines that the current road is congested according to the road condition information, sending a road congestion voice prompt.
6. The method of claim 5, further comprising:
the terminal replans a driving route according to the destination;
and if the terminal confirms that other driving routes except the current driving route exist, switching to the other driving routes by voice prompt.
7. The method of claim 1 or 2, wherein the sensor data further comprises:
acceleration data;
the method further comprises the following steps:
if the terminal confirms that the time of the terminal in the driving state is greater than a preset threshold value according to the acceleration data, the terminal prompts fatigue driving by voice;
the voice prompt for fatigue driving comprises at least one of the following: a voice prompt to rest, a voice inquiry whether to play music, and a voice prompt of the distance between the service area ahead and the current position.
8. The method of claim 1 or 2, wherein the sensor data further comprises:
acceleration data;
the method further comprises the following steps:
and if the terminal determines that the terminal is in a stop driving state according to the acceleration data, the terminal records a parking position.
9. The method of claim 8, wherein the terminal recording the parking location comprises:
the terminal displays the parking position on the terminal in a form of a notification message;
the method further comprises the following steps:
and if a viewing instruction of the notification message is received, the terminal prompts navigation information from the current position to the stop position.
10. The method of claim 1 or 2, wherein the sound data comprises at least one of:
sound data of a door, sound data of a seat belt.
11. The terminal is characterized in that the terminal is used for carrying out active voice reminding on a user when the terminal is in a vehicle-mounted state; the terminal includes: an input device, an output device, a processor, and a bus; the input device and the output device are respectively connected with the processor through the bus;
the input device is used for acquiring data, and the data comprises sound data and/or sensor data;
the processor is used for automatically starting a voice interaction function if the terminal is determined to be in a vehicle-mounted state according to the data; the vehicle-mounted state is a state that the terminal is in a vehicle, the sound data comprises sound data generated in a mutual switching process of the vehicle-mounted state and the off-board state, and the sensor data comprises sensor data changed in the mutual switching process of the vehicle-mounted state and the off-board state; the sensor data specifically includes at least one of: barometric pressure data, altitude data;
the processor is specifically configured to determine the possibility of the control action according to a preset correspondence between the sample sound data and the control action and the acquired sound data; and/or determining the possibility of the control action according to the corresponding relation between the preset change value of the sample sensor data and the control action and the obtained change value of the sensor data; identifying whether the terminal is currently in a vehicle-mounted state or not according to the possibility of occurrence of the control action;
the identifying whether the terminal is currently in a vehicle-mounted state according to the possibility of occurrence of the control action specifically includes: the possibility of occurrence of the control action is used as the input of a neural network, and the output result of the neural network is that the terminal is in a vehicle-mounted state or not in the vehicle-mounted state;
the output device is used for outputting interactive voice according to the instruction of the processor.
12. The terminal of claim 11, wherein the processor configured to initiate a voice interaction function comprises:
the processor instructs the output device to send out a first inquiry voice, wherein the first inquiry voice comprises at least one of whether to start an air conditioner, whether to start an audio application, whether to start navigation and whether to broadcast weather;
the input device is further used for receiving a first voice instruction, wherein the first voice instruction comprises a response instruction of the first inquiry voice;
the processor is further configured to execute the first voice instruction.
13. The terminal according to claim 11 or 12,
the output device is further configured to issue a second inquiry voice asking whether to start the engine if the input device does not collect sound data of the engine;
the input device is further configured to receive a second voice instruction, where the second voice instruction includes a response instruction of the second query voice.
14. The terminal according to claim 11 or 12,
the output device is further configured to issue an overspeed voice prompt if the processor determines that overspeed is occurring.
15. The terminal according to claim 11 or 12, wherein the processor is further configured to obtain traffic information;
the output device is further used for sending a road congestion voice prompt if the processor determines that the current road is congested according to the road condition information.
16. The terminal of claim 15, wherein the processor is further configured to re-plan a travel route based on a destination;
the output device is further configured to, if the processor confirms that there are other driving routes other than the current driving route, switch to the other driving routes with a voice prompt.
17. A terminal according to claim 11 or 12, wherein the sensor data further comprises: acceleration data;
the output device is further used for prompting fatigue driving by voice if the processor confirms that the time of the terminal in the driving state is greater than a preset threshold value according to the acceleration data;
the voice prompt for fatigue driving comprises at least one of the following: a voice prompt to rest, a voice inquiry whether to play music, and a voice prompt of the distance between the service area ahead and the current position.
18. A terminal according to claim 11 or 12, wherein the sensor data further comprises: acceleration data;
the processor is further used for determining that the terminal is in a stop driving state according to the acceleration data; and if the terminal is determined to be in the stop driving state, recording a parking position.
19. The terminal of claim 18,
the output device is further used for displaying the parking position on the terminal in the form of a notification message;
the input device is further used for acquiring an instruction for viewing the notification message;
and the output equipment is also used for prompting navigation information from the current position to the stop position according to the instruction.
20. The terminal according to claim 11 or 12, wherein the sound data comprises at least one of: sound data of a door, sound data of a seat belt.
21. A computer-readable storage medium having instructions stored therein, which when run on a terminal, cause the terminal to perform the vehicle-mounted reminder method according to any one of claims 1 to 10.
CN201780005848.5A 2016-12-27 2017-06-14 Vehicle-mounted reminding method and terminal Active CN108476263B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN2016112250365 2016-12-27
CN201611225036 2016-12-27
PCT/CN2017/088250 WO2018120666A1 (en) 2016-12-27 2017-06-14 On-board prompting method and terminal

Publications (2)

Publication Number Publication Date
CN108476263A CN108476263A (en) 2018-08-31
CN108476263B true CN108476263B (en) 2021-02-12

Family

ID=62707828

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201780005848.5A Active CN108476263B (en) 2016-12-27 2017-06-14 Vehicle-mounted reminding method and terminal

Country Status (2)

Country Link
CN (1) CN108476263B (en)
WO (1) WO2018120666A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110929074A (en) * 2018-08-31 2020-03-27 长城汽车股份有限公司 Vehicle-mounted voice broadcasting method and system
CN111159510A (en) * 2018-11-08 2020-05-15 奇酷互联网络科技(深圳)有限公司 Bus stop inquiry method based on intelligent terminal, intelligent terminal and device
CN109377115A (en) * 2018-12-19 2019-02-22 Oppo广东移动通信有限公司 Vehicular applications recommended method, device, terminal device and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8542218B2 (en) * 2010-08-19 2013-09-24 Hyundai Motor Company Electronic switch apparatus for vehicle
CN104660787B (en) * 2013-11-20 2017-08-25 昆山研达电脑科技有限公司 The method for aiding in user's driving communication
CN104754106A (en) * 2013-12-27 2015-07-01 富泰华工业(深圳)有限公司 Vehicle-mounted telephone control system and voice control method thereof
CN103973887A (en) * 2014-04-17 2014-08-06 刘崇庆 Application method for novel portable vehicle-mounted instrument terminal
CN105227746A (en) * 2014-06-09 2016-01-06 中兴通讯股份有限公司 Mobile terminal driving model control method, device and mobile terminal
CN104580690A (en) * 2014-11-25 2015-04-29 深圳市金立通信设备有限公司 Onboard mode control method
JP6354541B2 (en) * 2014-11-26 2018-07-11 株式会社デンソー Vehicle remote control system, portable communication device
CN104679588B (en) * 2015-03-18 2019-02-05 联想(北京)有限公司 A kind of Working mode switching method and electronic equipment

Also Published As

Publication number Publication date
WO2018120666A1 (en) 2018-07-05
CN108476263A (en) 2018-08-31


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant