CN109521932A - Voice control display processing method, device, vehicle, storage medium and equipment - Google Patents


Info

Publication number
CN109521932A
CN109521932A (application number CN201811311181.4A)
Authority
CN
China
Prior art keywords
state
voice
voice control
display
quick
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811311181.4A
Other languages
Chinese (zh)
Inventor
宁悦 (Ning Yue)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zebra Network Technology Co Ltd
Original Assignee
Zebra Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zebra Network Technology Co Ltd filed Critical Zebra Network Technology Co Ltd
Priority to CN201811311181.4A priority Critical patent/CN109521932A/en
Publication of CN109521932A publication Critical patent/CN109521932A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0486Drag-and-drop
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a voice control display processing method, apparatus, vehicle, storage medium and device. The voice control display processing method provided by the invention includes: first acquiring a voice activation instruction; then switching the display state of the voice control from an interaction state to an activation state according to the voice activation instruction, the interaction state being the default display state of the voice control; and then, when the voice control is in the activation state, switching the input mode of the terminal to a voice input mode. With the voice control display processing method provided by the invention, voice can be triggered quickly by activating the voice control, and the sense of interaction with the user during voice interaction can be enhanced by giving the voice control a concrete visual image.

Description

Voice control display processing method and device, vehicle, storage medium and equipment
Technical Field
The invention relates to the technical field of vehicle-mounted voice control, in particular to a voice control display processing method, a voice control display processing device, a vehicle, a storage medium and equipment.
Background
Along with the continuous development of scientific technology, the vehicle-mounted equipment is more and more intelligent, and great convenience is brought to the life of people. In the aspect of vehicle-mounted device control, an intelligent operating system is generally installed at present, and voice control is mostly realized for controlling the operating system.
In an existing vehicle-mounted control system, voice control is provided as a common application, with a voice touch-screen entrance arranged in the application center.
In addition, in this implementation of placing voice in the application center, the voice function must be triggered through a fixed application entrance. The voice therefore does not interact with the user and cannot give clear feedback for the user's different ways of using voice, so it lacks interest; moreover, if the voice entrance is set too deep, the voice triggering operation is very inconvenient.
Disclosure of Invention
The invention provides a voice control display processing method, a voice control display processing device, a vehicle, a storage medium and equipment, so that a user can quickly trigger voice, and the sense of interaction with the user is enhanced during voice interaction.
In a first aspect, the present invention provides a voice control display processing method, implemented by executing a software application on a processor of a terminal and rendering a graphical user interface on a touch screen of the terminal, the graphical content displayed by the graphical user interface including at least one controllable voice control. The method includes:
acquiring a voice activation instruction;
switching the display state of the voice control from an interaction state to an activation state according to the voice activation instruction, wherein the interaction state is the default display state of the voice control;
and when the voice control is in the activated state, the input mode of the terminal is switched to a voice input mode.
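The three steps above can be sketched as a minimal state holder; the class and enum names below are illustrative, not taken from the patent:

```python
from enum import Enum, auto

class DisplayState(Enum):
    INTERACTION = auto()  # default display state of the voice control
    ACTIVATION = auto()   # entered after a voice activation instruction

class InputMode(Enum):
    TOUCH = auto()
    VOICE = auto()

class VoiceControl:
    def __init__(self):
        # The interaction state is the default display state.
        self.display_state = DisplayState.INTERACTION
        self.input_mode = InputMode.TOUCH

    def on_voice_activation_instruction(self):
        # Steps 1-2: switch the display state from interaction to activation.
        self.display_state = DisplayState.ACTIVATION
        # Step 3: while the control is in the activation state,
        # the terminal accepts voice input.
        if self.display_state is DisplayState.ACTIVATION:
            self.input_mode = InputMode.VOICE

ctrl = VoiceControl()
ctrl.on_voice_activation_instruction()
print(ctrl.display_state.name, ctrl.input_mode.name)  # ACTIVATION VOICE
```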
In one possible design, the obtaining the voice activation instruction includes:
acquiring a touch activation signal acting on the voice control;
or,
and acquiring a preset voice activation signal for awakening the voice control.
In one possible design, the obtaining a preset voice activation signal for waking up the voice control includes:
acquiring a preset functional quick awakening word sound signal;
or,
and acquiring a preset emotional quick awakening word sound signal.
In one possible design, after the obtaining of the preset functional quick awakening word sound signal, the method further includes:
acquiring a first functional quick awakening word according to the functional quick awakening word sound signal;
determining a first functional animation corresponding to the first functional quick awakening word according to the first functional quick awakening word and a preset quick word dynamic effect database;
and displaying the first function animation.
In one possible design, after the obtaining of the preset emotional quick awakening word sound signal, the method further includes:
acquiring a first emotional quick awakening word according to the emotional quick awakening word sound signal;
determining a first emotion animation corresponding to the first emotional quick awakening word according to the first emotional quick awakening word and a preset quick word action database;
displaying the first emotional animation.
In one possible design, after the switching of the display state of the voice control from the interaction state to the activation state according to the voice activation instruction, the method further includes:
judging whether the voice activation instruction conflicts with a current display page or not;
and if so, switching the display state of the voice control from the activation state to a disappearance state.
In one possible design, before the obtaining the voice activation instruction, the method further includes:
acquiring a touch dragging signal acting on the voice control so as to switch the display state of the voice control from the interactive state to a static state;
and moving the voice control from a first position of the touch screen to a second position according to the touch dragging signal, wherein the first position is an initial position of the touch dragging signal, and the second position is a termination position of the touch dragging signal.
In one possible design, after the moving the voice control from the first position to the second position of the touch screen according to the touch drag signal, the method further includes:
and if the preset time length is exceeded and the touch screen receives an input instruction, switching the display state of the voice control from the static state to a fade-out state.
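The drag-to-static and timeout-to-fade-out behaviour of this design can be sketched as follows; the class name, the concrete `preset_duration` value, and the handler names are assumptions, since the patent only describes the states and the trigger conditions:

```python
import time
from enum import Enum, auto

class State(Enum):
    INTERACTION = auto()
    STATIC = auto()    # entered when the control is dragged
    FADE_OUT = auto()  # entered when input arrives after the preset duration

class DraggableVoiceControl:
    def __init__(self, position, preset_duration=3.0):
        # preset_duration (seconds) is a hypothetical value; the patent
        # does not fix the length of the "preset time length".
        self.state = State.INTERACTION
        self.position = position
        self.preset_duration = preset_duration
        self._drag_end = None

    def on_touch_drag(self, first_pos, second_pos):
        # Dragging switches interaction -> static and moves the control
        # from the drag's initial position to its termination position.
        self.state = State.STATIC
        self.position = second_pos
        self._drag_end = time.monotonic()

    def on_input_instruction(self):
        # If the preset duration has elapsed and the touch screen receives
        # an input instruction, switch static -> fade-out.
        if (self.state is State.STATIC and self._drag_end is not None
                and time.monotonic() - self._drag_end > self.preset_duration):
            self.state = State.FADE_OUT
```

With `preset_duration=0.0`, any later input instruction fades the control out immediately, which makes the transition easy to exercise.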
In a second aspect, the present invention further provides a voice control display processing apparatus, including:
the instruction acquisition module is used for acquiring a voice activation instruction;
the control switching module is used for switching the display state of the voice control from an interactive state to an activated state according to the voice activation instruction, wherein the interactive state is the default display state of the voice control;
and the input switching module is used for switching the input mode of the terminal to the voice input mode when the voice control is in the activated state.
In one possible design, the instruction acquisition module includes:
the touch instruction acquisition submodule, configured to acquire a touch activation signal acting on the voice control;
or,
the voice instruction acquisition submodule, configured to acquire a preset voice activation signal for waking up the voice control.
In one possible design, the voice instruction obtaining sub-module is specifically configured to:
acquiring a preset functional quick awakening word sound signal;
or,
and acquiring a preset emotional quick awakening word sound signal.
In one possible design, the speech control display processing apparatus further includes:
the awakening word acquisition module is used for acquiring a first functional quick awakening word according to the functional quick awakening word sound signal;
the animation determining module is used for determining a first functional animation corresponding to the first functional quick awakening word according to the first functional quick awakening word and a preset quick word dynamic effect database;
and the animation display module is used for displaying the first functional animation.
In one possible design, the awakening word acquisition module is further configured to acquire a first emotional quick awakening word according to the emotional quick awakening word sound signal;
the animation determining module is used for determining a first emotional animation corresponding to the first emotional quick awakening word according to the first emotional quick awakening word and a preset quick word action database;
and the animation display module is used for displaying the first emotion animation.
In one possible design, the animation display module is configured to determine whether the voice activation instruction conflicts with a currently displayed page;
the control switching module is further configured to switch the display state of the voice control from the activated state to a disappearing state.
In one possible design, the instruction acquisition module is further configured to obtain a touch dragging signal acting on the voice control, so that the display state of the voice control is switched from the interactive state to a static state;
the animation display module is further configured to move the voice control from a first position to a second position of the touch screen according to the touch dragging signal, where the first position is an initial position of the touch dragging signal, and the second position is a termination position of the touch dragging signal.
In one possible design, the input switching module is configured to switch the display state of the voice control from the static state to a fade-out state if the preset duration has elapsed and the touch screen receives an input instruction.
In a third aspect, the present invention further provides a storage medium having a computer program stored thereon, where the computer program is executed by a processor to implement any one of the possible speech control display processing methods in the first aspect.
In a fourth aspect, the present invention provides an electronic device, including:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the possible speech control display processing methods of the first aspect via execution of the executable instructions.
In a fifth aspect, the present invention also provides a vehicle comprising: the electronic device recited in the fourth aspect.
According to the voice control display processing method, a voice activation instruction is obtained first; the display state of the voice control is then switched from the interaction state to the activation state according to the voice activation instruction; and when the voice control is in the activation state, the input mode of the terminal is switched to the voice input mode. Voice is thereby triggered quickly by activating the voice control, and the sense of interaction with the user during voice interaction can be enhanced by giving the voice control a concrete visual image.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagram illustrating an application scenario for a speech control display processing method according to an exemplary embodiment of the present invention;
FIG. 2 is a diagram of a display interface of the terminal in the application scenario shown in FIG. 1;
FIG. 3 is a flowchart illustrating a method of speech control display processing in accordance with an exemplary embodiment of the present invention;
FIG. 4 is a display interface diagram of the terminal in the embodiment shown in FIG. 3;
FIG. 5 is a flowchart illustrating a method of speech control display processing in accordance with another illustrative embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method of speech control display processing in accordance with yet another illustrative embodiment of the present invention;
FIG. 7 is a diagram of a display interface illustrating dragging a voice control in a terminal according to an illustrative embodiment of the present invention;
FIG. 8 is a schematic diagram of the speech control display state switching logic of the present invention;
FIG. 9 is a schematic structural diagram of a speech control display processing apparatus according to an exemplary embodiment of the present invention;
FIG. 10 is a block diagram of an instruction acquisition module in the embodiment of FIG. 9;
fig. 11 is a schematic structural diagram of a speech control display processing apparatus according to another exemplary embodiment of the present invention;
fig. 12 is a schematic structural diagram of an electronic device shown in accordance with an exemplary embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is an application scenario diagram illustrating a voice control display processing method according to an exemplary embodiment of the present invention. As shown in fig. 1, the voice control display processing method of this embodiment may be applied to a vehicle 1, on which a terminal 2 is disposed; the terminal 2 may be an in-vehicle head unit.
Fig. 2 is a display interface diagram of the terminal in the application scenario shown in fig. 1. As shown in fig. 2, a graphical user interface is displayed on the touch screen of the terminal 2 by executing a relevant software application on a processor of the terminal 2. The graphical user interface of the terminal 2 may include various application icons, such as a photographing application, a map application, a multimedia application, a game application, and the like.
In addition, the graphic content displayed on the graphical user interface includes at least one controllable voice control. The voice control can be displayed in the graphical user interface in the form of a floating window and may take on various graphic images, such as cartoon characters, animal figures, water-drop figures, and the like. The voice control serves as an entrance for invoking voice: a user can trigger voice by clicking the voice control, and the voice control can carry the various graphic images for display.
Fig. 3 is a flowchart illustrating a method of speech control display processing according to an exemplary embodiment of the invention. As shown in fig. 3, the method for processing display of a voice control provided in this embodiment includes:
Step 101, acquiring a voice activation instruction.
With continued reference to fig. 2, to prevent the floating window of the voice control from blocking key information on the page, the voice control is usually docked in the docking areas on the left and right sides of the screen. When the user needs to trigger voice, the voice input application can be activated by clicking the touch screen or by inputting a wake-up word.
Specifically, the obtaining of the voice activation instruction may be obtaining a touch activation signal acting on the voice control, or obtaining a preset voice activation signal for waking up the voice control, and is not specifically limited in this embodiment.
The acquiring of the preset voice activation signal for waking up the voice control may include acquiring a preset functional quick awakening word sound signal or acquiring a preset emotional quick awakening word sound signal. The functional quick awakening word sound signal may be, for example, a "take a photo" voice signal, an "open the map" voice signal, or a "play music" voice signal; the emotional quick awakening word sound signal may be, for example, a voice signal asking the voice control to show a happy or unhappy expression.
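A minimal sketch of distinguishing the two kinds of quick awakening word, under the assumption of the example word lists below (the patent does not fix exact word lists):

```python
# Hypothetical wake-word lists; the patent gives only examples such as
# "take a photo", "open the map", "play music" (functional) and
# happy/unhappy expression requests (emotional).
FUNCTIONAL_WAKE_WORDS = {"take a photo", "open the map", "play music"}
EMOTIONAL_WAKE_WORDS = {"give us a smile", "you are so annoying"}

def classify_wake_word(word: str) -> str:
    """Classify a recognized quick awakening word as functional or emotional."""
    if word in FUNCTIONAL_WAKE_WORDS:
        return "functional"
    if word in EMOTIONAL_WAKE_WORDS:
        return "emotional"
    return "unknown"
```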
Step 102, switching the display state of the voice control from the interactive state to the activated state according to the voice activation instruction.
Specifically, after the corresponding voice activation instruction is acquired, the display state of the voice control can be switched from the interactive state to the activation state according to the voice activation instruction, wherein the interactive state is the default display state of the voice control.
The interactive state is a normal state of the voice control, and when the voice control is embodied as an animal, such as a zebra, the interactive state can be a state that the zebra is eating grass. It should be noted that the specific form of the interaction state is not limited in this embodiment.
The activation state is a state with visual feedback after the voice function is activated; when the voice control takes the form of an animal, such as a zebra, the activation state may be a state in which the zebra is running. It should be noted that the specific form of the activated state is not limited in this embodiment.
Fig. 4 is a display interface diagram of the terminal in the embodiment shown in fig. 3. Referring to fig. 2 and 4, when the voice control is a cartoon character, the interactive state may be a state in which the cartoon character is quiet, and the activation state may be a state in which the cartoon character is speaking.
Step 103, when the voice control is in an activated state, switching the input mode of the terminal to a voice input mode.
Specifically, when the voice control is in an activated state, the input mode of the terminal is switched to the voice input mode. At this time, the user can realize control over the vehicle-mounted device by inputting voice, such as playing music, opening a map, taking a picture, performing voice interaction, and the like.
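Once the terminal is in the voice input mode, recognized commands can be dispatched to in-vehicle functions; the command strings and action names below are illustrative only, since the patent does not enumerate a command set:

```python
# Illustrative command-to-action table for the voice input mode.
VOICE_ACTIONS = {
    "play music": "start multimedia player",
    "open the map": "launch navigation",
    "take a picture": "open camera",
}

def dispatch_voice_command(command: str) -> str:
    # Unknown commands fall back to general voice interaction.
    return VOICE_ACTIONS.get(command, "voice interaction")
```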
In this embodiment, the voice activation instruction is obtained first; the display state of the voice control is then switched from the interaction state to the activation state according to the voice activation instruction; and when the voice control is in the activation state, the input mode of the terminal is switched to the voice input mode. Voice is thereby triggered quickly by activating the voice control, and the sense of interaction with the user during voice interaction can be enhanced by giving the voice control a concrete visual image.
Fig. 5 is a flowchart illustrating a speech control display processing method according to another exemplary embodiment of the present invention. As shown in fig. 5, the method for processing display of a voice control provided in this embodiment includes:
Step 201, acquiring a first functional quick awakening word according to the functional quick awakening word sound signal.
Specifically, the first functional quick awakening word may be obtained according to the functional quick awakening word sound signal, and the setting of the functional quick awakening word sound signal may be as shown in table one:
step 202, switching the display state of the voice control from the interactive state to the activated state according to the voice activation instruction.
Specifically, after the corresponding voice activation instruction is acquired, the display state of the voice control can be switched from the interactive state to the activation state according to the voice activation instruction, wherein the interactive state is the default display state of the voice control.
The interactive state is a normal state of the voice control, and when the voice control is embodied as an animal, such as a zebra, the interactive state can be a state that the zebra is eating grass. It should be noted that the specific form of the interaction state is not limited in this embodiment.
The activation state is a state with visual feedback after the voice function is activated; when the voice control takes the form of an animal, such as a zebra, the activation state may be a state in which the zebra is running. It should be noted that the specific form of the activated state is not limited in this embodiment.
And step 203, when the voice control is in the activated state, switching the input mode of the terminal to a voice input mode.
Specifically, when the voice control is in an activated state, the input mode of the terminal is switched to the voice input mode. At this time, the user can realize control over the vehicle-mounted device by inputting voice, such as playing music, opening a map, taking a picture, performing voice interaction, and the like.
Step 204, determining a first functional animation corresponding to the first functional quick awakening word according to the first functional quick awakening word and a preset quick word dynamic effect database.
According to the correspondence in table one, the first functional animation corresponding to the first functional quick awakening word can be determined from the first functional quick awakening word and the preset quick word dynamic effect database. For example, when the user inputs a farewell wake-up word, the first functional animation corresponds to a "goodbye" animation.
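The lookup in the preset quick word dynamic effect database can be sketched as a plain dictionary; since table one is not reproduced in the text, the entries below are illustrative assumptions:

```python
# Sketch of the preset quick word dynamic effect database; these
# wake-word/animation pairs are hypothetical examples.
QUICK_WORD_EFFECTS = {
    "bye bye": "goodbye_animation",
    "take a photo": "camera_animation",
}

def functional_animation_for(first_wake_word: str):
    # Returns the first functional animation for the wake-up word, or None
    # when the word has no entry in the database.
    return QUICK_WORD_EFFECTS.get(first_wake_word)
```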
Step 205, displaying the first function animation.
After the first functional animation corresponding to the first functional quick awakening word is determined according to the first functional quick awakening word and the preset quick word dynamic effect database, the first functional animation can be displayed according to the acquired first functional animation.
In this embodiment, a first functional quick awakening word is obtained according to the functional quick awakening word sound signal, and the display state of the voice control is switched from the interactive state to the activated state according to the voice activation instruction. When the voice control is in the activated state, the input mode of the terminal is switched to the voice input mode, and an animation is displayed after the first functional animation corresponding to the first functional quick awakening word is determined according to the first functional quick awakening word and the preset quick word dynamic effect database. The sense of interaction with the user during voice interaction can thus be enhanced, improving the user experience.
FIG. 6 is a flowchart illustrating a method of speech control display processing according to yet another exemplary embodiment of the invention. As shown in fig. 6, the method for processing display of a voice control provided in this embodiment includes:
Step 301, acquiring a first emotional quick awakening word according to the emotional quick awakening word sound signal.
Specifically, the first emotional quick awakening word can be obtained according to the emotional quick awakening word sound signal, and the setting of the emotional quick awakening word sound signal may be as shown in table two:
Wake-up word class      Emotional quick awakening word      Emotional animation
Happy                   "Give grandpa a smile"              Laugh (笑)
Unhappy                 "You are too annoying"              Cry
Step 302, switching the display state of the voice control from the interactive state to the activated state according to the voice activation instruction.
Specifically, after the corresponding voice activation instruction is acquired, the display state of the voice control can be switched from the interactive state to the activation state according to the voice activation instruction, wherein the interactive state is the default display state of the voice control.
The interactive state is a normal state of the voice control, and when the voice control is embodied as an animal, such as a zebra, the interactive state can be a state that the zebra is eating grass. It should be noted that the specific form of the interaction state is not limited in this embodiment.
The activation state is a state with visual feedback after the voice function is activated; when the voice control takes the form of an animal, such as a zebra, the activation state may be a state in which the zebra is running. It should be noted that the specific form of the activated state is not limited in this embodiment.
Step 303, when the voice control is in an activated state, switching the input mode of the terminal to a voice input mode.
Specifically, when the voice control is in an activated state, the input mode of the terminal is switched to the voice input mode. At this time, the user can realize control over the vehicle-mounted device by inputting voice, such as playing music, opening a map, taking a picture, performing voice interaction, and the like.
Step 304, determining a first emotion animation corresponding to the first emotional quick awakening word according to the first emotional quick awakening word and a preset quick word action database.
According to the correspondence in table two, the first emotion animation corresponding to the first emotional quick awakening word is determined from the first emotional quick awakening word and the preset quick word action database. For example, when the user inputs the emotional quick awakening word "give grandpa a smile", the first emotion animation corresponds to the "laugh" animation.
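The table-two lookup can likewise be sketched as a dictionary; the English wake-up words here are approximate renderings of the Chinese originals:

```python
# Emotional quick awakening words and their animations, after table two
# (translations are approximate).
EMOTIONAL_EFFECTS = {
    "give grandpa a smile": "laugh",
    "you are too annoying": "cry",
}

def emotional_animation_for(wake_word: str):
    # Returns the first emotion animation for the wake-up word, or None.
    return EMOTIONAL_EFFECTS.get(wake_word)
```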
Step 305, displaying the first emotion animation.
After the first emotion animation corresponding to the first emotional quick awakening word is determined according to the first emotional quick awakening word and the preset quick word action database, the first emotion animation can be displayed.
In this embodiment, the first emotional quick awakening word is obtained according to the emotional quick awakening word sound signal, and the display state of the voice control is switched from the interactive state to the activated state according to the voice activation instruction. When the voice control is in the activated state, the input mode of the terminal is switched to the voice input mode, and an animation is displayed after the first emotion animation corresponding to the first emotional quick awakening word is determined according to the first emotional quick awakening word and the preset quick word action database. The sense of interaction with the user during voice interaction can thus be enhanced, improving the user experience.
On the basis of the above embodiments, in order to enable the user to adjust the displayed position of the voice control according to the page display characteristics, in this embodiment the voice control can also be moved according to a touch dragging signal of the user. Fig. 7 is a diagram illustrating a display interface for dragging a voice control in a terminal according to an exemplary embodiment of the present invention. As shown in fig. 7, the touch screen of the terminal acquires a touch dragging signal acting on the voice control, so that the display state of the voice control is switched from the interactive state to the static state, and the voice control is moved from a first position to a second position of the touch screen according to the touch dragging signal, where the first position is the initial position of the touch dragging signal and the second position is the termination position of the touch dragging signal. Furthermore, after being dragged, the voice control can be docked in the docking areas on the left and right sides so that the displayed page is not blocked.
For clarity, the display-state switching logic of the voice control in this embodiment is described below. Fig. 8 is a schematic diagram of the logic for switching the display state of the voice control. As shown in Fig. 8, the display state of the voice control in this embodiment mainly includes an interaction state, an activation state, a fade-out state, a static state, and a disappearance state. The interaction state is the normal state of the voice control; the activation state provides visual feedback when a voice quick wake-up word is hit; the fade-out state is used on immersive pages so that the control does not obstruct the displayed information; the static state is a stationary variant of the interaction state; and the disappearance state hides the control on pages with which it is mutually exclusive, as defined by hierarchy and scene. The switching between these states is explained in detail below:
Interaction state to activation state: when the voice control is triggered by voice or touch, its display state is switched from the interaction state to the activation state.
Activation state to disappearance state: it is judged whether the voice activation instruction conflicts with the currently displayed page; if so, the display state of the voice control is switched from the activation state to the disappearance state. Examples of voice activation instructions in a game interface include "invite teammates", "fight again", "view score list", and "play the song-guessing list".
Interaction state to disappearance state: it is first judged whether the interaction state conflicts with the currently displayed page; if so, the display state of the voice control is switched from the interaction state to the disappearance state.
Activation state to static state: when the voice control is dragged or the activation state ends, the display state is switched from the activation state to the static state.
Static state to activation state: when the voice control is triggered by voice or touch, its display state is switched from the static state to the activation state.
Static state to disappearance state: it is judged whether the static state conflicts with the currently displayed page; if so, the display state of the voice control is switched from the static state to the disappearance state.
Interaction state to fade-out state: if a preset duration (for example, 10 seconds) elapses and the touch screen receives an input instruction, the display state of the voice control is switched from the interaction state to the fade-out state.
Static state to fade-out state: if a preset duration (for example, 5 seconds) elapses and the touch screen receives an input instruction, the display state of the voice control is switched from the static state to the fade-out state.
Fade-out state to activation state: when the voice control is triggered by voice or touch, its display state is switched from the fade-out state to the activation state.
Fade-out state to disappearance state: it is judged whether the fade-out state conflicts with the currently displayed page; if so, the display state of the voice control is switched from the fade-out state to the disappearance state.
Interaction state to static state: when the voice control is dragged, the display state is switched from the interaction state to the static state.
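The transitions above form a small finite state machine. The following is a compact sketch of that switching logic; the state names follow the text, while the event names and table layout are illustrative assumptions rather than the patent's implementation.

```python
from enum import Enum, auto

class State(Enum):
    INTERACTION = auto()  # default display state of the voice control
    ACTIVATION = auto()   # visual feedback after a quick wake-up word hit
    FADE_OUT = auto()     # dimmed so immersive pages stay readable
    STATIC = auto()       # stationary variant of the interaction state
    DISAPPEAR = auto()    # hidden on mutually exclusive pages

# (current state, event) -> next state, mirroring the list above.
TRANSITIONS = {
    (State.INTERACTION, "trigger"): State.ACTIVATION,
    (State.INTERACTION, "page_conflict"): State.DISAPPEAR,
    (State.INTERACTION, "drag"): State.STATIC,
    (State.INTERACTION, "timeout_input"): State.FADE_OUT,
    (State.ACTIVATION, "page_conflict"): State.DISAPPEAR,
    (State.ACTIVATION, "drag"): State.STATIC,
    (State.ACTIVATION, "end"): State.STATIC,
    (State.STATIC, "trigger"): State.ACTIVATION,
    (State.STATIC, "page_conflict"): State.DISAPPEAR,
    (State.STATIC, "timeout_input"): State.FADE_OUT,
    (State.FADE_OUT, "trigger"): State.ACTIVATION,
    (State.FADE_OUT, "page_conflict"): State.DISAPPEAR,
}

def next_state(state, event):
    # An unlisted (state, event) pair leaves the display state unchanged.
    return TRANSITIONS.get((state, event), state)
```

For example, a voice or touch trigger moves the control from any visible non-active state into the activation state, while a page conflict sends any state to the disappearance state.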
Fig. 9 is a schematic structural diagram of a speech control display processing apparatus according to an exemplary embodiment of the present invention. As shown in fig. 9, the speech control display processing apparatus provided in this embodiment includes:
an instruction obtaining module 401, configured to obtain a voice activation instruction;
a control switching module 402, configured to switch a display state of a voice control from an interaction state to an activation state according to the voice activation instruction, where the interaction state is a default display state of the voice control;
an input switching module 403, configured to switch an input mode of the terminal to a voice input mode when the voice control is in the activated state.
Fig. 10 is a block diagram of the instruction obtaining module in the embodiment of Fig. 9. The instruction obtaining module 401 includes:
the touch instruction acquisition submodule 4011 is configured to acquire a touch activation signal acting on the voice control;
or,
the voice instruction obtaining sub-module 4012, configured to obtain a preset voice activation signal for waking up the voice control.
In one possible design, the voice command obtaining sub-module 4012 is specifically configured to:
acquiring a preset functional quick awakening word sound signal;
or,
and acquiring a preset emotional quick awakening word sound signal.
On the basis of the embodiment shown in fig. 9, fig. 11 is a schematic structural diagram of a speech control display processing apparatus according to another exemplary embodiment of the present invention. As shown in fig. 11, the apparatus for displaying and processing a voice control provided in this embodiment further includes:
a wake-up word obtaining module 404, configured to obtain a first functional fast wake-up word according to the functional fast wake-up word sound signal;
an animation determining module 405, configured to determine a first functional animation corresponding to the first functional quick wakeup word according to the first functional quick wakeup word and a preset quick word dynamic effect database;
and an animation display module 406, configured to display the first functional animation.
In one possible design, the wake-up word obtaining module 404 is further configured to obtain a first emotional quick wake-up word according to the emotional quick wake-up word sound signal;
the animation determining module 405 is configured to determine a first emotional animation corresponding to the first emotional quick wake-up word according to the first emotional quick wake-up word and the preset quick word dynamic effect database;
the animation display module 406 is configured to display the first emotional animation.
In one possible design, the animation display module 406 is configured to determine whether the voice activation instruction conflicts with a currently displayed page;
the control switching module 402 is further configured to switch the display state of the voice control from the activated state to a disappearing state.
In a possible design, the instruction obtaining module 401 is further configured to obtain a touch dragging signal acting on the voice control, so that the display state of the voice control is switched from the interactive state to a static state;
the animation display module 406 is further configured to move the voice control from a first position to a second position of the touch screen according to the touch dragging signal, where the first position is an initial position of the touch dragging signal, and the second position is a termination position of the touch dragging signal.
In a possible design, the input switching module 403 is configured to switch the display state of the voice control from the static state to a fade-out state if the touch screen receives an input instruction after a preset time period.
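The timeout condition in the input switching module can be sketched as a single predicate. The duration value and the names below are assumptions for illustration ("for example, 5 seconds" follows the static-state example in the description).

```python
# Minimal sketch of the timeout-based fade-out check: once the preset
# duration has elapsed and the touch screen reports an input instruction,
# the static voice control is switched to the fade-out state.
STATIC_FADE_TIMEOUT_S = 5.0  # assumed preset duration

def should_fade_out(elapsed_s, touch_input_received):
    """True when the static control should switch to the fade-out state."""
    return elapsed_s > STATIC_FADE_TIMEOUT_S and touch_input_received
```

The control switching module would then apply the static-to-fade-out transition whenever this predicate holds.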
It should be noted that the voice control display processing apparatus in the embodiments shown in Figs. 9 to 11 may be used to execute the methods in the embodiments shown in Figs. 3 to 8; the specific implementation and the technical effects are similar and are not repeated here.
An embodiment of the present invention further provides a vehicle, including the voice control display processing apparatus of any one of the embodiments shown in Figs. 9 to 11.
Fig. 12 is a schematic structural diagram of an electronic device shown in accordance with an exemplary embodiment of the present invention. As shown in fig. 12, the electronic device provided in this embodiment includes:
a processor 501 and a memory 502; wherein:
the memory 502, which may also be a flash memory, is used to store the computer program.
And a processor 501 for executing the execution instructions stored in the memory to implement the steps of the method. Reference may be made in particular to the description relating to the preceding method embodiment.
Alternatively, the memory 502 may be separate or integrated with the processor 501.
When the memory 502 is a device independent of the processor 501, the electronic apparatus may further include:
a bus 503 for connecting the memory 502 and the processor 501.
The present embodiment also provides a program product comprising a computer program stored in a readable storage medium. The computer program can be read from a readable storage medium by at least one processor of the electronic device, and the execution of the computer program by the at least one processor causes the electronic device to implement the methods provided by the various embodiments described above.
An embodiment of the present invention further provides a vehicle, including: such as the electronic device in the embodiment shown in fig. 12.
Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be implemented by hardware driven by program instructions. The program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (12)

1. A voice control display processing method, characterized in that a software application is executed on a processor of a terminal and a graphical user interface is displayed on a touch screen of the terminal, graphical content displayed by the graphical user interface comprising at least one controllable voice control, the method comprising:
acquiring a voice activation instruction;
switching the display state of the voice control from an interaction state to an activation state according to the voice activation instruction, wherein the interaction state is the default display state of the voice control;
and when the voice control is in the activated state, the input mode of the terminal is switched to a voice input mode.
2. The method according to claim 1, wherein the obtaining of the voice activation instruction includes:
acquiring a touch activation signal acting on the voice control;
or,
and acquiring a preset voice activation signal for awakening the voice control.
3. The method according to claim 2, wherein the obtaining a preset voice activation signal for waking up the voice control comprises:
acquiring a preset functional quick awakening word sound signal;
or,
and acquiring a preset emotional quick awakening word sound signal.
4. The method of claim 3, wherein after the obtaining the preset functional shortcut wake-up word tone signal, the method further comprises:
acquiring a first functional quick awakening word according to the functional quick awakening word sound signal;
determining a first functional animation corresponding to the first functional quick awakening word according to the first functional quick awakening word and a preset quick word dynamic effect database;
and displaying the first function animation.
5. The method of claim 3, wherein after the obtaining the preset emotional shortcut wakening word tone signal, further comprising:
acquiring a first emotional quick awakening word according to the emotional quick awakening word sound signal;
determining a first emotion animation corresponding to the first emotional quick awakening word according to the first emotional quick awakening word and a preset quick word action database;
displaying the first emotional animation.
6. The method according to any one of claims 1 to 3, wherein after the obtaining of the voice activation instruction to switch the display state of the voice control from the interactive state to the active state, the method further includes:
judging whether the voice activation instruction conflicts with a current display page or not;
and if so, switching the display state of the voice control from the activation state to a disappearance state.
7. The method according to claim 6, further comprising, before the obtaining the voice activation instruction:
acquiring a touch dragging signal acting on the voice control so as to switch the display state of the voice control from the interactive state to a static state;
and moving the voice control from a first position of the touch screen to a second position according to the touch dragging signal, wherein the first position is an initial position of the touch dragging signal, and the second position is a termination position of the touch dragging signal.
8. The method of claim 7, wherein after the moving the voice control from the first position to the second position of the touch screen according to the touch drag signal, the method further comprises:
and if the preset time length is exceeded and the touch screen receives an input instruction, switching the display state of the voice control from the static state to a fade-out state.
9. A voice control display processing apparatus, characterized by comprising:
the instruction acquisition module is used for acquiring a voice activation instruction;
the control switching module is used for switching the display state of the voice control from an interactive state to an activated state according to the voice activation instruction, wherein the interactive state is the default display state of the voice control;
and the input switching module is used for switching the input mode of the terminal to the voice input mode when the voice control is in the activated state.
10. A storage medium on which a computer program is stored, the program, when executed by a processor, implementing the speech control display processing method of any one of claims 1 to 8.
11. An electronic device, comprising:
a processor; and the number of the first and second groups,
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the speech control display processing method of any of claims 1-8 via execution of the executable instructions.
12. A vehicle, characterized by comprising: an electronic device as claimed in claim 11.
CN201811311181.4A 2018-11-06 2018-11-06 Voice control display processing method, device, vehicle, storage medium and equipment Pending CN109521932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811311181.4A CN109521932A (en) 2018-11-06 2018-11-06 Voice control display processing method, device, vehicle, storage medium and equipment


Publications (1)

Publication Number Publication Date
CN109521932A true CN109521932A (en) 2019-03-26

Family

ID=65773069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811311181.4A Pending CN109521932A (en) 2018-11-06 2018-11-06 Voice control display processing method, device, vehicle, storage medium and equipment

Country Status (1)

Country Link
CN (1) CN109521932A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960537A (en) * 2019-03-29 2019-07-02 北京金山安全软件有限公司 Interaction method and device and electronic equipment
CN112346621A (en) * 2019-08-08 2021-02-09 北京车和家信息技术有限公司 Virtual function button display method and device
CN114489420A (en) * 2022-01-14 2022-05-13 维沃移动通信有限公司 Voice information sending method and device and electronic equipment

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104423870A (en) * 2013-09-10 2015-03-18 北京三星通信技术研究有限公司 Control in graphical user interface, display method as well as method and device for operating control
CN106445460A (en) * 2016-10-18 2017-02-22 渡鸦科技(北京)有限责任公司 Control method and device
CN107329990A (en) * 2017-06-06 2017-11-07 北京光年无限科技有限公司 A kind of mood output intent and dialogue interactive system for virtual robot
CN107516533A (en) * 2017-07-10 2017-12-26 阿里巴巴集团控股有限公司 A kind of session information processing method, device, electronic equipment
US20180052571A1 (en) * 2016-08-16 2018-02-22 Lg Electronics Inc. Mobile terminal and method for controlling the same
CN107861626A (en) * 2017-12-06 2018-03-30 北京光年无限科技有限公司 The method and system that a kind of virtual image is waken up
CN107992263A (en) * 2017-12-19 2018-05-04 维沃移动通信有限公司 A kind of information sharing method and mobile terminal
CN108109622A (en) * 2017-12-28 2018-06-01 武汉蛋玩科技有限公司 A kind of early education robot voice interactive education system and method
CN108459880A (en) * 2018-01-29 2018-08-28 出门问问信息科技有限公司 voice assistant awakening method, device, equipment and storage medium
CN108564945A (en) * 2018-03-13 2018-09-21 斑马网络技术有限公司 Vehicle-mounted voice control method and device and electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
KR102358656B1 (en) Devices, methods, and graphical user interfaces for providing haptic feedback
CN107957836B (en) Screen recording method and device and terminal
US20230161471A1 (en) Video-based interaction and video processing methods, apparatus, device, and storage medium
CN109521932A (en) Voice control display processing method, device, vehicle, storage medium and equipment
CN107452400A (en) Voice broadcast method and device, computer installation and computer-readable recording medium
CN113901239B (en) Information display method, device, equipment and storage medium
JPH0785243A (en) Data processing method
US10795554B2 (en) Method of operating terminal for instant messaging service
US20220368840A1 (en) Video-based conversational interface
CN110767234B (en) Audio information processing method and device, electronic equipment and storage medium
CN108984089B (en) Touch operation method and device, storage medium and electronic equipment
CN112631814B (en) Game scenario dialogue playing method and device, storage medium and electronic equipment
CN108845854A (en) Method for displaying user interface, device, terminal and storage medium
JP6176041B2 (en) Information processing apparatus and program
WO2023134470A1 (en) Page control method, apparatus and device, and storage medium
US20070293315A1 (en) Storage medium storing game program and game device
KR102134882B1 (en) Method for controlling contents play and an electronic device thereof
CN111135579A (en) Game software interaction method and device, terminal equipment and storage medium
CN111881395A (en) Page presenting method, device, equipment and computer readable storage medium
CN110237531A (en) Method, apparatus, terminal and the storage medium of game control
JP5073299B2 (en) GAME DEVICE AND GAME PROGRAM
WO2021232956A1 (en) Device control method and apparatus, and storage medium and electronic device
CN112584243A (en) Multimedia data processing method and device and electronic equipment
CN109859293B (en) Animation multi-state switching method and device for android device
CN113421577A (en) Video dubbing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190326