CN110231863B - Voice interaction method and vehicle-mounted equipment - Google Patents


Info

Publication number
CN110231863B
CN110231863B
Authority
CN
China
Prior art keywords
voice
user
voice interaction
icon
vehicle
Prior art date
Legal status
Active
Application number
CN201810184727.8A
Other languages
Chinese (zh)
Other versions
CN110231863A (en
Inventor
耿梦娇
Current Assignee
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Priority date
Filing date
Publication date
Application filed by Banma Zhixing Network Hongkong Co Ltd filed Critical Banma Zhixing Network Hongkong Co Ltd
Priority to CN201810184727.8A
Publication of CN110231863A
Application granted
Publication of CN110231863B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 — Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 — Interaction techniques based on GUIs based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D 30/00 — Reducing energy consumption in communication networks
    • Y02D 30/70 — Reducing energy consumption in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides a voice interaction method and a vehicle-mounted device. The method comprises the following steps: if the interactive voice of a user is received, determining the position of the user in the vehicle; and adjusting the display effect of the voice interaction icon in the corresponding icon display area according to that position, so as to indicate that the user is using the voice interaction service. In this scheme, only the display effect of a single object, the voice interaction icon, needs to be adjusted: when users at different positions in the vehicle use the voice interaction service, the display effect of the icon changes with the position of the speaking user. Each user in the vehicle can therefore easily perceive who is using the voice interaction service, which improves the interactivity of the service.

Description

Voice interaction method and vehicle-mounted equipment
Technical Field
The invention relates to the technical field of internet, in particular to a voice interaction method and vehicle-mounted equipment.
Background
In order to enable the driver to conveniently use required services while driving and to improve driving safety, many automobiles are now provided with a voice interaction service (commonly called a voice assistant), so that a user in the automobile, such as the driver, can use required functions, such as vehicle-mounted entertainment, navigation, or other functions provided by the service, through voice interaction with it.
In order to increase interactivity, when the voice interaction service is used, a user avatar and a voice interaction icon are displayed on the user interface of the voice interaction service. For example, when the main driver uses the voice interaction service, the avatar of the main driver is displayed on the user interface; when the assistant driver uses the voice interaction service, the avatar of the assistant driver is displayed on the user interface.
Displaying the user avatar increases interactivity on the one hand and, on the other hand, allows the avatar to distinguish which user is using the voice interaction service. In practice, however, the avatar may need to be set by the user, which makes the service inconvenient to use. Moreover, different users may have identical avatars, and even when the avatars differ, a person in the vehicle cannot easily and intuitively perceive who is using the service, since this would require knowing which avatar corresponds to each person.
Disclosure of Invention
In view of this, embodiments of the present invention provide a voice interaction method and a vehicle-mounted device, so as to improve the use interactivity of a voice interaction service.
In a first aspect, an embodiment of the present invention provides a voice interaction method, including:
if receiving the interactive voice of the user, determining the position of the user in the vehicle;
and adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the position, wherein the display effect represents that the user uses the voice interaction service.
In a second aspect, an embodiment of the present invention provides a voice interaction apparatus, including:
the determining module is used for determining the position of the user in the vehicle if the interactive voice of the user is received;
and the processing module is used for adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the position, wherein the display effect represents that the user uses the voice interaction service.
In a third aspect, an embodiment of the present invention provides an in-vehicle device, which includes a processor and a memory, where the memory is configured to store one or more computer instructions, and when the one or more computer instructions are executed by the processor, the voice interaction method in the first aspect is implemented. The in-vehicle device may also include a communication interface for communicating with other devices or a communication network.
An embodiment of the present invention provides a computer storage medium, configured to store a computer program, where the computer program enables a computer to implement the voice interaction method in the first aspect when executed.
According to the voice interaction method and the vehicle-mounted device provided by the embodiments of the invention, when a user in the vehicle triggers an interactive voice to use the voice interaction service, the position of that user in the vehicle is determined, and the display effect of the voice interaction icon in the corresponding icon display area is adjusted according to that position to indicate that the user is using the service. Because only the display effect of a single object, the voice interaction icon, needs to be adjusted, the display effect changes as the position of the speaking user changes, so each user in the vehicle can easily perceive who is using the voice interaction service. This improves the interactivity of the service and, since no user-avatar configuration needs to be supported, also simplifies its use.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a flowchart of a first voice interaction method according to an embodiment of the present invention;
FIGS. 2a to 2c are schematic diagrams illustrating display effects of voice interaction icons;
fig. 3 is a flowchart of a second voice interaction method according to an embodiment of the present invention;
fig. 4 is an interface schematic diagram of a voice interaction method provided in an application scenario according to an embodiment of the present invention;
fig. 5 is an interface schematic diagram of a voice interaction method provided in an embodiment of the present invention in another application scenario;
fig. 6 is a flowchart of a third embodiment of a voice interaction method according to the present invention;
FIG. 7 is a schematic view of an interface corresponding to the embodiment shown in FIG. 6;
FIG. 8 is a schematic structural diagram of a voice interaction apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an in-vehicle device corresponding to the voice interaction apparatus provided in the embodiment shown in fig. 8.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that the term "and/or" as used herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may represent: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
The words "if", as used herein may be interpreted as "at \8230; \8230whenor" when 8230; \8230when or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a good or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such good or system. Without further limitation, an element defined by the phrases "comprising one of \8230;" does not exclude the presence of additional like elements in an article or system comprising the element.
In addition, the sequence of steps in each method embodiment described below is only an example and is not strictly limited.
Fig. 1 is a flowchart of a first voice interaction method according to an embodiment of the present invention. The voice interaction method of this embodiment may be executed by a voice interaction service installed in a vehicle-mounted device, or by the voice interaction service together with matching hardware. As shown in fig. 1, the method comprises the following steps:
101. and if the interactive voice of the user is received, determining the position of the user in the vehicle.
102. And adjusting the display effect of the voice interactive icon in the corresponding icon display area according to the position of the user in the vehicle, wherein the display effect indicates that the user uses the voice interactive service.
The voice interaction icon is a control inherent in the voice interaction service, and may be implemented as a graphic with a certain size and a certain shape, such as a circle illustrated in fig. 2 a. In addition, as shown in fig. 2a, after the voice interaction service is started, an icon display area corresponding to the voice interaction icon is displayed in a user interface, and the voice interaction icon is displayed in the corresponding icon display area.
In practical applications, when a user wants to use a voice interactive service, a corresponding interactive voice may be output. When the voice interaction service receives the interaction voice output by the user, the position of the user in the vehicle is determined, and then the display effect of the voice interaction icon in the corresponding icon display area is adjusted to be matched with the position of the user in the vehicle.
In order to determine the position in the vehicle of the user who outputs the interactive voice, an alternative implementation is as follows. When users at every seat in the vehicle are allowed to use the voice interaction service, a voice input/output device such as a microphone may be provided at each seat. When the user at a certain seat inputs the interactive voice through the device at that seat, the installation position of that device can be obtained, from a preset correspondence between voice input/output devices and installation positions (i.e. seats), as the position in the vehicle of the user who output the interactive voice. It can be understood that the interactive voice received by the voice interaction service may carry an identifier of the corresponding voice input/output device, so the service knows which device transmitted the voice and can query the correspondence to determine the user's location.
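The device-to-seat lookup described above can be sketched roughly as follows; the microphone IDs, seat names, and the `locate_user` helper are illustrative assumptions, not names taken from the patent.

```python
# Hypothetical mapping from voice input/output device to installation
# position (seat), as described in the embodiment above.
SEAT_BY_MIC = {
    "mic_front_left": "driver",
    "mic_front_right": "front_passenger",
    "mic_rear_left": "rear_left",
    "mic_rear_right": "rear_right",
}

def locate_user(interactive_voice: dict) -> str:
    """Resolve the speaker's seat from the device identifier carried
    by the interactive voice."""
    mic_id = interactive_voice["device_id"]
    return SEAT_BY_MIC[mic_id]
```

A usage example: a voice packet tagged `{"device_id": "mic_front_left"}` resolves to the driver's seat, which then drives the icon adjustment in step 102.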
In addition to locating the user's position in the vehicle in the above manner, the position may also be determined with, for example, a time-difference-of-arrival localization method or a sound-source localization method based on a microphone array, where the microphone array may be provided on the vehicle-mounted device.
After the position of the user in the vehicle is determined, the display effect of the voice interaction icon in the corresponding icon display area is adjusted according to the position to indicate that the user uses the voice interaction service.
In an alternative embodiment, the adjustment process may be implemented as follows: an icon display position corresponding to the position of the user in the vehicle is determined within the corresponding icon display area, and the voice interaction icon is displayed at that position, as shown in fig. 2 a. This embodiment takes as an example the case where only the main driver and the assistant driver are allowed to use the voice interaction service, with the main driver seat on the front left of the vehicle and the assistant driver seat on the front right. In fig. 2a, assuming the user who triggers the interactive voice is in the main driver seat, the icon display position is determined to be a position in the left half of the corresponding icon display area. This position may be set in advance, i.e. the icon display position corresponding to the main driver seat is preset, so that when the user's position is determined to correspond to the main driver seat, the corresponding icon display position is determined from the preset position.
It should be noted that, when the user at each seat in the vehicle is allowed to use the voice interaction service, the corresponding icon display area may be divided into a plurality of sub-areas in advance, the distribution of the sub-areas corresponds to the distribution of the seats with respect to the vehicle, and the icon display position may be set in advance in each sub-area, for example, the center position of each sub-area is set as the icon display position. Alternatively, in order to enable the users in the vehicle to intuitively see which user is currently using the voice interaction service, the boundaries of the sub-areas may be explicitly presented, for example, by highlighting the boundary lines with a certain color line.
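The sub-area division above can be sketched as follows, with the icon display position preset to the centre of each seat's sub-area as the text suggests; the coordinate system, seat grid, and function name are illustrative assumptions.

```python
def icon_position(area_width, area_height, seat, layout):
    """Return the (x, y) icon display position for a given seat.

    `layout` maps seat name -> (column, row) in a grid whose
    distribution mirrors the seats' distribution in the vehicle.
    The preset icon display position is the centre of the seat's
    sub-area.
    """
    cols = max(c for c, _ in layout.values()) + 1
    rows = max(r for _, r in layout.values()) + 1
    col, row = layout[seat]
    sub_w, sub_h = area_width / cols, area_height / rows
    # centre of the sub-area corresponding to the seat
    return (col * sub_w + sub_w / 2, row * sub_h + sub_h / 2)

# Two-seat example from fig. 2a: main driver on the left, assistant
# driver on the right (hypothetical names).
LAYOUT = {"driver": (0, 0), "front_passenger": (1, 0)}
```

For a 400x100 display area with this layout, the driver's icon lands in the centre of the left half and the assistant driver's in the centre of the right half.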
In practical application, by default, users at all seats in the vehicle are allowed to use the voice interaction service, so that when the voice interaction service is initially set, the users can input the vehicle type to enable the voice interaction service to automatically generate the corresponding icon display area corresponding to the vehicle type and finish the division of the sub-areas.
In the above alternative embodiment, the display position of the voice interactive icon is adjusted to match the position of the user in the vehicle.
In addition, in another alternative embodiment, the adjustment of the display effect of the voice interaction icon can be implemented as follows: the voice interaction icon is displayed in the center of the corresponding icon display area, and the icon is adjusted to present a display animation oriented toward the position of the user in the vehicle. The main point of this embodiment is to adjust the form of the voice interaction icon to match the user's position in the vehicle.
Still taking fig. 2a, where the main driver and the assistant driver can use the voice interaction service, as an example, assume the determined user position corresponds to the main driver seat. Optionally, as shown in fig. 2b, the voice interaction icon may be displayed at the center of the corresponding icon display area; since the main driver seat corresponds to the left half sub-area, the icon may be adjusted to exhibit a dynamic effect of fluctuating toward the sound source position, i.e. the left half sub-area. Or, optionally, as shown in fig. 2c, the part of the icon's boundary line corresponding to the left half sub-area may be adjusted to present a moving effect extending into that sub-area.
Based on this embodiment, when users at different positions in the vehicle use the voice interaction service, only the display effect of one object, the voice interaction icon, needs to be adjusted so that it changes correspondingly with the position of the speaking user. Each user in the vehicle can thus easily perceive who is using the voice interaction service, which improves the interactivity of the service.
The following describes the voice interaction method provided by the embodiment of the present invention with reference to an alternative practical application scenario shown in fig. 3.
Fig. 3 is a flowchart of a second voice interaction method according to an embodiment of the present invention, and as shown in fig. 3, the method may include the following steps:
301. if the awakening voice of the first user awakening the voice interaction service is received and the voice interaction service is in the non-awakened state, determining a first position of the first user in the vehicle, and placing the voice interaction service in the awakened state.
It can be understood that when no user has started using the voice interaction service, the service is in a closed state; when a user needs to use it, the user must first wake up (or, equivalently, turn on) the voice interaction service, just as an APP installed on a mobile phone must be opened before it can be used.
In an alternative embodiment, the user wakes up the voice interaction service by voice, that is, the user outputs a wake-up voice for waking up the voice interaction service when the user needs to use the voice interaction service. For convenience of description and to distinguish from what is required to be introduced in the subsequent steps, the user who outputs the wake-up voice is referred to as a first user, and the position of the first user in the vehicle is referred to as a first position. The process of determining the first position adjustment may refer to the description in the foregoing embodiments, which are not described herein again.
In addition, it can be understood that when the voice interaction service has already been woken up by some user, it would be redundant for the first user to output the wake-up voice again, since the service is already awake. Therefore, after receiving the wake-up voice of the first user, whether to respond to it must be determined in combination with the state of the voice interaction service. Specifically, after the wake-up voice is received, if the voice interaction service is in the un-awakened state, the steps of determining the first position of the first user in the vehicle and placing the service in the awakened state are performed in response to the wake-up voice; conversely, if the service is already in the awakened state, the wake-up voice is ignored.
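The wake-state gating of step 301 can be sketched as below; the class and method names are assumptions for illustration, not the patent's implementation.

```python
class VoiceInteractionService:
    """Minimal sketch: a wake-up voice is honoured only when the
    service is not already awake; otherwise it is ignored."""

    def __init__(self):
        self.awake = False
        self.active_position = None  # first position of the waking user

    def on_wake_voice(self, position):
        """Handle a wake-up voice from the given in-vehicle position.
        Returns True if the service responded, False if it ignored
        the redundant wake-up voice."""
        if self.awake:
            return False
        self.awake = True
        self.active_position = position  # icon effect follows this position
        return True
```

In use, a second wake-up voice arriving while the service is awake leaves the stored position, and hence the icon's display effect, unchanged.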
302. And adjusting the display effect of the voice interactive icon in the corresponding icon display area according to the first position.
The step of adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the first position may refer to the description in the foregoing embodiments, which is not repeated herein.
Optionally, the display effect of the voice interaction icon may also reflect the state of the voice interaction service, i.e. that it is in the awakened state, for example by giving the icon a specific color that indicates the service is currently awake.
303. And if the use voice of the second user using the voice interaction service is received, determining a second position of the second user in the vehicle.
304. And adjusting the display effect of the voice interactive icon in the corresponding icon display area according to the second position.
After the first user wakes up the voice interaction service, in an optional application scenario, any user in the vehicle can use the voice interaction service, the any user is called a second user, and the position of the second user in the vehicle is called a second position. It will be appreciated that the second user may be the same user as the first user, or may be a different user.
When the second user uses the voice interaction service, the second user outputs a usage voice, i.e. a voice commanding the service to execute a certain function: for example, a voice for starting the navigation function, which may include the start and destination addresses, or a voice for searching for an address, a song, and so on.
After receiving the usage voice output by the second user, the voice interaction service determines a second position of the second user in the vehicle, and adjusts the display effect of the voice interaction icon in the corresponding icon display area according to the second position, where the determination process of the second position and the adjustment process of the display effect of the voice interaction icon may also be referred to the descriptions in the foregoing embodiments, and are not described herein again.
It is to be understood that the voice interaction service performs the corresponding function, such as searching for a song or triggering navigation path planning, in response to the second user's usage voice. On the other hand, when the second user is not the same user as the first user, the display effect of the voice interaction icon changes, i.e. it adapts as the position of the speaking user changes.
For example, as shown in fig. 4, assume the first user and the second user are different. After the first user outputs the wake-up voice, the voice interaction icon is displayed at a position on the left side of the corresponding icon display area, corresponding to the first position of the first user. When the second user outputs the usage voice, the icon is displayed at a position on the right side of the area, corresponding to the second position of the second user. If the first user then outputs a usage voice again, the icon switches back to the left-side position corresponding to the first position.
As mentioned above, when the voice interaction service is in the awakened state, the voice interaction icon may present a color corresponding to that state. In addition, optionally, while the second user is outputting the usage voice, the voice interaction service is listening to it; so that users can see this intuitively, the icon may exhibit a fluctuation effect according to the voice characteristics of the usage voice, for example a fluctuation corresponding to the voice signal strength superimposed on the icon. That is, while the service is awake and receiving a user's voice, the display effect of the voice interaction icon may present, in addition to the effect corresponding to the user's position, a color corresponding to the awakened state and/or a fluctuation effect following the voice characteristics.
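The superimposed fluctuation effect can be sketched as a simple mapping from voice signal strength to wave amplitude; the normalisation range and scaling constant here are assumptions for illustration only.

```python
def wave_amplitude(signal_strength, max_amplitude=20.0):
    """Map a normalised voice signal strength in [0, 1] to a wave
    amplitude (e.g. in pixels) superimposed on the icon.

    Out-of-range inputs are clamped so the animation stays bounded.
    """
    clamped = min(max(signal_strength, 0.0), 1.0)
    return clamped * max_amplitude
```

A louder usage voice thus produces a larger wave on the icon, up to a fixed cap.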
In the above optional application scenario, after the first user wakes up the voice interaction service, any user in the vehicle may use it. In another optional application scenario, however, after the first user wakes up the service, only the first user may continue to use it until the first user finishes, for example by clicking an exit button or by not inputting a usage voice for a certain time. In this application scenario, the first user and the second user are different users.
Therefore, in the application scenario, after receiving the usage voice of the second user and determining the second position of the second user in the vehicle, the method may further include the following steps:
determining, according to the second position, whether the second user and the first user are the same user; if they are the same user, keeping the display effect of the voice interaction icon unchanged; if not, ignoring the usage voice, i.e. discarding it. It can be understood that if the second position is the same as the first position of the first user, they are the same user, and otherwise they are different users.
It should be noted that when the display effect of the voice interaction icon includes not only an effect adapted to the user's position but also an effect reflecting the user's voice characteristics, keeping the display effect unchanged means keeping the effect adapted to the first position of the first user unchanged, because a display effect reflecting the voice characteristics of the first user may be superimposed at the same time.
To understand this application scenario intuitively, as shown in fig. 5, assume the first user wakes up the voice interaction service; corresponding to the first position of the first user, the voice interaction icon is displayed at a position on the left side of the corresponding icon display area. After the second user outputs the usage voice, if the second position is found to be the same as the first position, the second user and the first user are the same user: the icon remains at the left-side position, with a sound-wave effect corresponding to the signal strength of the usage voice superimposed on it. Conversely, if the second position differs from the first position, they are not the same user: the icon still remains at the left-side position and does not switch to the right side of the area, so the user can intuitively perceive that the voice interaction service does not respond to the second user's usage voice.
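Under the scenario's assumption that positions identify users, the same-user check reduces to a position comparison; this minimal sketch uses hypothetical names.

```python
def should_respond(first_position, second_position):
    """Exclusive-use scenario: respond only when the usage voice
    comes from the same in-vehicle position as the wake-up voice.
    When this returns False the usage voice is discarded and the
    icon's display effect is left unchanged."""
    return second_position == first_position
```

For example, a usage voice from the assistant driver's seat after the main driver woke the service would be discarded.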
In conclusion, the display effect of the voice interaction icon changes with the position of the sound source, so that every user in the vehicle can easily perceive who is conversing with the voice interaction service, further improving human-computer interaction.
Fig. 6 is a flowchart of a third embodiment of the voice interaction method provided in an embodiment of the present invention. As shown in fig. 6, the method may include the following steps:
601. In response to an operation of waking up the voice interaction service triggered by the user via a key, display the voice interaction icon in the center of the corresponding icon display area.
602. If a usage voice of the user using the voice interaction service is received, determine the user's position in the vehicle.
603. Adjust the display effect of the voice interaction icon in the corresponding icon display area according to the user's position in the vehicle.
The foregoing embodiments mention that the voice interaction service may be woken up by voice; alternatively, it may also be woken up by a key, which may be a physical key or a virtual key. For example, a physical key may be provided on the vehicle-mounted device, and the voice interaction service is woken up when the user presses it; or a shortcut icon of the voice interaction service may be added to the screen desktop of the vehicle-mounted device, and the user clicks the shortcut icon to wake up the service.
In this case, since it cannot be recognized who woke up the voice interaction service, the voice interaction icon may first be displayed in the center of the corresponding icon display area; then, once the position in the vehicle of the user outputting the usage voice is determined, the display effect of the icon in the icon display area is adjusted according to that position. As shown in fig. 7, when the voice interaction service is woken up by a key, the voice interaction icon is displayed in the center of the corresponding icon display area. If the usage voice is then determined to come from the person in the driver seat, the icon is adjusted to a position in the left half of the icon display area; if it is determined to come from the person in the passenger seat, the icon is adjusted to a position in the right half. The display position of the voice interaction icon thus changes in response to changes in the user's position.
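The key-wakeup flow of fig. 7 can be sketched as a simple mapping. The seat labels and region names below are assumptions chosen for illustration:

```python
# Hypothetical seat-to-region mapping for the key-triggered wake-up flow.
SEAT_TO_REGION = {
    "driver": "left",      # main driver seat -> left half of the icon area
    "passenger": "right",  # front passenger seat -> right half
}

def icon_region(speaker_seat=None):
    """Icon stays centered until the speaker's seat is determined."""
    if speaker_seat is None:
        return "center"  # key wake-up: who spoke is not yet known
    return SEAT_TO_REGION.get(speaker_seat, "center")
```

The icon starts at the center after a key wake-up and moves to the left or right half once a usage voice reveals the speaker's seat.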
The voice interaction apparatus of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these voice interaction apparatuses can be constructed from commercially available hardware components configured through the steps taught in the present solution.
Fig. 8 is a schematic structural diagram of a voice interaction apparatus according to an embodiment of the present invention, as shown in fig. 8, the apparatus includes: a determining module 11 and a processing module 12.
The determining module 11 is configured to determine the position of the user in the vehicle if an interactive voice of the user is received.
The processing module 12 is configured to adjust the display effect of the voice interaction icon in the corresponding icon display area according to the position, where the display effect indicates that the user is using the voice interaction service.
Optionally, the processing module 12 may be configured to:
determining an icon display position corresponding to the position in the corresponding icon display area, and displaying the voice interaction icon at the icon display position; or,
displaying the voice interaction icon in the center of the corresponding icon display area, and adjusting the voice interaction icon, with a display animation, to tend toward the position.
Optionally, the determining module 11 may be configured to: if an interactive voice input by the user through a voice input/output device in the vehicle is received, acquire the installation position corresponding to the voice input/output device as the position, according to a preset correspondence between voice input/output devices and installation positions.
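The preset correspondence between voice input/output devices and installation positions amounts to a lookup table. The device identifiers and position labels below are hypothetical, used only to illustrate the idea:

```python
# Sketch of the preset correspondence between voice input/output devices and
# their installation positions; device IDs and positions are hypothetical.
DEVICE_TO_POSITION = {
    "mic_front_left": "driver seat",
    "mic_front_right": "passenger seat",
}

def position_of(device_id):
    # The installation position of the device that captured the interactive
    # voice is taken as the user's position in the vehicle.
    return DEVICE_TO_POSITION[device_id]
```

If the interactive voice arrives via `mic_front_left`, the user's position is taken to be the driver seat.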
Optionally, the determining module 11 includes: a first determining unit 111, configured to determine a first position of a first user in the vehicle if a wake-up voice by which the first user wakes up the voice interaction service is received while the voice interaction service is in an un-awakened state.
Accordingly, the processing module 12 comprises: the first processing unit 121 is configured to adjust a display effect of the voice interaction icon in the corresponding icon display area according to the first position, and place the voice interaction service in an awakened state.
Optionally, the first processing unit 121 may be further configured to ignore the wake-up voice if the voice interaction service is already in the awakened state.
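Taken together, the first determining unit 111 and the first processing unit 121 behave like a small state machine. The following sketch uses assumed names and is not the patent's implementation:

```python
class WakeState:
    """Minimal sketch of wake-up handling; names are illustrative."""
    def __init__(self):
        self.awake = False
        self.first_position = None

    def on_wake_voice(self, position):
        if self.awake:
            return "ignored"  # already awake: the wake-up voice is ignored
        self.awake = True
        self.first_position = position  # icon is adjusted toward this position
        return "woken"
```

A first wake-up voice transitions the service to the awakened state and records the first position; any later wake-up voice is ignored while the service remains awake.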
Optionally, the determining module 11 further includes: a second determining unit 112, configured to determine a second location of the second user in the vehicle if a usage voice of the second user using the voice interaction service is received.
Accordingly, the processing module 12 comprises: and the second processing unit 122 is configured to adjust a display effect of the voice interaction icon in the corresponding icon display area according to the second position.
The display effect further includes: the voice interaction icon presents a color corresponding to the awakened state and/or presents a fluctuation effect according to the voice characteristics of the usage voice.
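One plausible way to derive the fluctuation effect from the voice characteristics is to map signal loudness to a wave amplitude. This RMS-based sketch is an assumption for illustration, not the patent's actual implementation:

```python
def wave_amplitude(samples, max_amp=1.0):
    """Map audio samples (floats in [-1, 1]) to a wave-effect amplitude."""
    if not samples:
        return 0.0
    # Root-mean-square loudness, clamped to the maximum displayable amplitude.
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    return min(max_amp, rms)
```

A louder usage voice then yields a larger superimposed sound-wave effect on the icon, up to the display's maximum.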
Optionally, the processing module 12 may further include: a third processing unit 123, configured to determine whether the second user and the first user are the same user according to the second position; if they are the same user, keep the display effect unchanged; if they are not the same user, ignore the usage voice.
Optionally, the processing module 12 is further configured to: responding to the operation of awakening the voice interaction service triggered by the user through a key, and displaying the voice interaction icon in the center of the corresponding icon display area.
Accordingly, the determining module 11 is further configured to: determine the position of the user in the vehicle if a usage voice of the user using the voice interaction service is received.
The apparatus shown in fig. 8 can execute the methods of the embodiments shown in fig. 1 to fig. 6; for parts not described in detail in this embodiment, refer to the related descriptions of the foregoing embodiments, which are not repeated here.
Having described the internal functions and structure of the voice interaction apparatus, in one possible design the apparatus may be implemented as a vehicle-mounted device, which, as shown in fig. 9, may include a processor 21 and a memory 22. The memory 22 stores a program supporting the voice interaction apparatus in executing the voice interaction method provided in the embodiments shown in fig. 1 to fig. 6, and the processor 21 is configured to execute the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of performing the steps of:
if receiving the interactive voice of the user, determining the position of the user in the vehicle;
and adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the position, wherein the display effect represents that the user uses the voice interaction service.
Optionally, the processor 21 is further configured to perform all or part of the steps in the embodiments shown in fig. 1 to 6.
The voice interaction device may further include a communication interface 23 for the voice interaction device to communicate with other devices or a communication network.
In addition, an embodiment of the present invention provides a computer storage medium for storing computer software instructions for a voice interaction apparatus, which includes a program for executing the voice interaction method in the method embodiments shown in fig. 1 to fig. 6.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by means of a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part that contributes over the prior art, may be embodied in the form of a computer program product, which may be stored on one or more computer-usable storage media containing computer-usable program code, including but not limited to disk storage, CD-ROM, and optical storage.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random access memory (RAM) and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of voice interaction, comprising:
if receiving the interactive voice of the user, determining the position of the user in the vehicle;
adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the position, wherein the display effect represents that the user uses the voice interaction service;
the method for adjusting the display effect of the voice interactive icon in the corresponding icon display area according to the position comprises the following steps: and determining an icon display position corresponding to the position in the corresponding icon display area, and displaying the voice interaction icon at the icon display position.
2. The method of claim 1, wherein the adjusting the display effect of the voice interactive icon in the corresponding icon display area according to the position further comprises:
displaying the voice interaction icon in the center of the corresponding icon display area, and adjusting the voice interaction icon, with a display animation, to tend toward the position.
3. The method of claim 2, wherein determining the location of the user in the vehicle if the interactive voice of the user is received comprises:
if an interactive voice input by the user through a voice input/output device in the vehicle is received, acquiring the installation position corresponding to the voice input/output device as the position, according to a preset correspondence between voice input/output devices and installation positions.
4. The method of claim 2, wherein determining the location of the user in the vehicle if the interactive voice of the user is received comprises:
if a wake-up voice for a first user to wake up the voice interaction service is received and the voice interaction service is in an un-wake-up state, determining a first position of the first user in a vehicle;
the adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the position comprises:
adjusting the display effect of the voice interactive icon in the corresponding icon display area according to the first position;
the method further comprises the following steps: placing the voice interaction service in an awakened state.
5. The method of claim 4, wherein after receiving a wake-up voice for the first user to wake up the voice interaction service, further comprising:
if the voice interaction service is already in the awakened state, ignoring the wake-up voice.
6. The method of claim 4, wherein adjusting the display effect of the voice interaction icon in the corresponding icon display region according to the first position further comprises:
if a usage voice of a second user using the voice interaction service is received, determining a second position of the second user in the vehicle; and
adjusting the display effect of the voice interaction icon in the corresponding icon display area according to the second position.
7. The method of claim 6, wherein the display effect further comprises: the voice interaction icon presenting a color corresponding to the awakened state and/or presenting a fluctuation effect according to voice characteristics of the usage voice.
8. The method of claim 6, further comprising:
determining whether the second user and the first user are the same user according to the second position;
if they are the same user, keeping the display effect unchanged; and
if they are not the same user, ignoring the usage voice.
9. The method of any of claims 1-3, wherein prior to determining the user's location in the vehicle, further comprising:
responding to the operation of awakening the voice interaction service triggered by the user through a key, and displaying the voice interaction icon in the center of the corresponding icon display area;
if receiving the interactive voice of the user, determining the position of the user in the vehicle, including:
and if receiving the use voice of the user using the voice interaction service, determining the position of the user in the vehicle.
10. An in-vehicle apparatus, characterized by comprising a memory and a processor; wherein the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the voice interaction method of any of claims 1 to 9.
CN201810184727.8A 2018-03-06 2018-03-06 Voice interaction method and vehicle-mounted equipment Active CN110231863B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810184727.8A CN110231863B (en) 2018-03-06 2018-03-06 Voice interaction method and vehicle-mounted equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810184727.8A CN110231863B (en) 2018-03-06 2018-03-06 Voice interaction method and vehicle-mounted equipment

Publications (2)

Publication Number Publication Date
CN110231863A CN110231863A (en) 2019-09-13
CN110231863B true CN110231863B (en) 2023-03-24

Family

ID=67862192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810184727.8A Active CN110231863B (en) 2018-03-06 2018-03-06 Voice interaction method and vehicle-mounted equipment

Country Status (1)

Country Link
CN (1) CN110231863B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110171372B (en) * 2019-05-27 2020-12-15 广州小鹏汽车科技有限公司 Interface display method and device of vehicle-mounted terminal and vehicle
CN112977294A (en) * 2019-12-13 2021-06-18 北京车和家信息技术有限公司 Display method and device applied to vehicle-mounted display
CN111880693A (en) * 2020-07-03 2020-11-03 芜湖雄狮汽车科技有限公司 Method and system for editing position of application program list of automobile display screen
CN112309395A (en) * 2020-09-17 2021-02-02 广汽蔚来新能源汽车科技有限公司 Man-machine conversation method, device, robot, computer device and storage medium
CN112365891B (en) * 2020-10-30 2024-06-21 东风汽车有限公司 Vehicle-machine virtual voice assistant interaction method for vehicle through screen, electronic equipment and storage medium
CN113851126A (en) * 2021-09-22 2021-12-28 思必驰科技股份有限公司 In-vehicle voice interaction method and system
CN115534850B (en) * 2022-11-28 2023-05-16 北京集度科技有限公司 Interface display method, electronic device, vehicle and computer program product

Citations (4)

Publication number Priority date Publication date Assignee Title
CN104035696A (en) * 2013-03-04 2014-09-10 观致汽车有限公司 Display method and device of vehicle-mounted message center on touch display interface
CN106104677A (en) * 2014-03-17 2016-11-09 谷歌公司 Visually indicating of the action that the voice being identified is initiated
CN106168951A (en) * 2015-05-20 2016-11-30 三星电子株式会社 Electronic installation and control method thereof
CN106415467A (en) * 2013-10-29 2017-02-15 大众汽车有限公司 Device and method for adapting content of status bar

Family Cites Families (14)

Publication number Priority date Publication date Assignee Title
CN101464773A (en) * 2007-12-19 2009-06-24 神基科技股份有限公司 Method and computer system for displaying program execution window along with user position
CN103187060A (en) * 2011-12-28 2013-07-03 上海博泰悦臻电子设备制造有限公司 Vehicle-mounted speech processing device
CN103593081B (en) * 2012-08-17 2017-11-07 上海博泰悦臻电子设备制造有限公司 The control method of mobile unit and phonetic function
CN103885743A (en) * 2012-12-24 2014-06-25 大陆汽车投资(上海)有限公司 Voice text input method and system combining with gaze tracking technology
CN104076916B (en) * 2013-03-29 2017-05-24 联想(北京)有限公司 Information processing method and electronic device
KR20150109937A (en) * 2014-03-21 2015-10-02 현대자동차주식회사 Method for controlling multi source and multi display
CN103838487B (en) * 2014-03-28 2017-03-29 联想(北京)有限公司 A kind of information processing method and electronic equipment
JP5968578B2 (en) * 2014-04-22 2016-08-10 三菱電機株式会社 User interface system, user interface control device, user interface control method, and user interface control program
CN104332159B (en) * 2014-10-30 2017-05-10 上海修源网络科技有限公司 Human-computer interaction method and device for vehicle-mounted voice operating system
CN105786293A (en) * 2014-12-19 2016-07-20 大陆汽车投资(上海)有限公司 Self-adaption user interface display method and vehicle-mounted system
US11816325B2 (en) * 2016-06-12 2023-11-14 Apple Inc. Application shortcuts for carplay
US20170357521A1 (en) * 2016-06-13 2017-12-14 Microsoft Technology Licensing, Llc Virtual keyboard with intent-based, dynamically generated task icons
CN107122179A (en) * 2017-03-31 2017-09-01 阿里巴巴集团控股有限公司 The function control method and device of voice
CN107680591A (en) * 2017-09-21 2018-02-09 百度在线网络技术(北京)有限公司 Voice interactive method, device and its equipment based on car-mounted terminal

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN104035696A (en) * 2013-03-04 2014-09-10 观致汽车有限公司 Display method and device of vehicle-mounted message center on touch display interface
CN106415467A (en) * 2013-10-29 2017-02-15 大众汽车有限公司 Device and method for adapting content of status bar
CN106104677A (en) * 2014-03-17 2016-11-09 谷歌公司 Visually indicating of the action that the voice being identified is initiated
CN106168951A (en) * 2015-05-20 2016-11-30 三星电子株式会社 Electronic installation and control method thereof

Also Published As

Publication number Publication date
CN110231863A (en) 2019-09-13

Similar Documents

Publication Publication Date Title
CN110231863B (en) Voice interaction method and vehicle-mounted equipment
US11676601B2 (en) Voice assistant tracking and activation
US9787812B2 (en) Privacy management
WO2017118270A1 (en) Vehicle-mounted hmi adjustment method, vehicle-mounted terminal and storage medium
KR102312210B1 (en) User interface for accessing a set of functions, method and computer readable storage medium for providing a user interface for accessing a set of functions
CN110874202B (en) Interaction method, device, medium and operating system
CN106739946B (en) Starting method and device of automobile air conditioner
US9956939B2 (en) Platform for wireless interaction with vehicle
CN110308961B (en) Theme scene switching method and device of vehicle-mounted terminal
US20140267035A1 (en) Multimodal User Interface Design
US20170286785A1 (en) Interactive display based on interpreting driver actions
CN102774321B (en) Vehicle-mounted system and sound control method thereof
US11126391B2 (en) Contextual and aware button-free screen articulation
CN106157955A (en) A kind of sound control method and device
US10369943B2 (en) In-vehicle infotainment control systems and methods
KR20150089660A (en) The System and Method for Booting the Application of the Terminal
CN104598124A (en) Method and system for regulating field angle of head-up display device
CN112735411A (en) Control method, client, vehicle, voice system, and storage medium
CN112172705A (en) Vehicle-mounted intelligent hardware management and control method based on intelligent cabin and intelligent cabin
CN104717349A (en) Display method of terminal user interface and terminal
JP2015051759A (en) System and method for suppressing sound generated by vehicle during vehicle start operation
CN109358928A (en) In the method, apparatus and mobile unit of the desktop presentation data of mobile unit
CN110400582A (en) A kind of audio frequency controller method, audio management system and onboard system
CN113002449B (en) Control method and device of vehicle-mounted HMI (human machine interface) equipment
CN116204253A (en) Voice assistant display method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201218

Address after: Room 603, 6 / F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Ltd.

Address before: A four-storey 847 mailbox in Grand Cayman Capital Building, British Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant