CN115202548A - Voice operation guiding method and device for application function, computer equipment and medium - Google Patents
Voice operation guiding method and device for application function, computer equipment and medium
- Publication number
- CN115202548A CN115202548A CN202210771672.7A CN202210771672A CN115202548A CN 115202548 A CN115202548 A CN 115202548A CN 202210771672 A CN202210771672 A CN 202210771672A CN 115202548 A CN115202548 A CN 115202548A
- Authority
- CN
- China
- Prior art keywords
- target
- suspension control
- user
- application function
- voice operation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present application relates to a voice operation guidance method and apparatus for an application function, a computer device, and a storage medium. The method includes: displaying a floating control on a target interface; receiving a specific operation performed by a user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated; and, in response to the specific operation, guiding the user to perform the target voice operation. Embodiments of the present application can help a user quickly become familiar with the voice operation of a target application function.
Description
Technical Field
The present application relates to the field of information processing technologies, and in particular, to a method and an apparatus for guiding voice operations of an application function, a computer device, and a storage medium.
Background
The following statements merely provide background information related to the present application and do not necessarily constitute prior art.
With the development of voice technology, the voice functions supported by current voice products are numerous and relatively complex, and a user cannot quickly and conveniently learn all of the related usage methods when first using such a product.
Currently, voice products usually provide users with a voice function instruction manual. However, the inventor realized that such manuals are cumbersome to use and insufficiently intuitive, and that users rarely have the time or patience to study them, so users cannot quickly become familiar with how to use the voice functions of a voice product.
Disclosure of Invention
In view of the above disadvantages or shortcomings, the present application provides a voice operation guidance method and apparatus for an application function, a computer device, and a storage medium.
According to a first aspect, the present application provides a voice operation guidance method for an application function, which in one embodiment includes:
displaying a floating control on a target interface;
receiving a specific operation performed by a user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated;
and in response to the specific operation, presenting demonstration content of the target voice operation to guide the user in using voice to operate the target application function.
In one embodiment, the specific operation includes the user dragging the floating control into a target controllable area of the target interface and keeping the floating control within the target controllable area for a preset duration.
In one embodiment, receiving the specific operation performed by the user on the floating control includes:
receiving a move operation performed by the user on the floating control;
in response to the move operation, acquiring the current position of the floating control;
determining, according to the current position, whether the floating control is located within the target controllable area;
if the floating control is located within the target controllable area, acquiring the dwell time of the floating control and comparing the dwell time with the preset duration;
and if the dwell time is not less than the preset duration, determining that the specific operation has been received.
In one embodiment, before acquiring the dwell time of the floating control, the method further includes:
starting to time the dwell time of the floating control when the floating control is detected within the target controllable area, and stopping the timing when the floating control is detected being dragged out of the target controllable area;
or,
starting to time the dwell time of the floating control when the floating control is detected within the target controllable area and has stopped moving, and stopping the timing when the floating control is detected moving again or when the distance of the renewed movement exceeds a preset distance.
In one embodiment, determining, according to the current position, whether the floating control is located within the target controllable area includes:
acquiring position information of the controllable area corresponding to each application function of the target application that supports voice operation;
determining, according to the current position and the position information of the controllable area corresponding to each such application function, whether the floating control is located within the controllable area corresponding to any application function of the target application that supports voice operation;
and if so, determining that the floating control is located within the target controllable area.
In one embodiment, before guiding the user to perform the target voice operation, the method further includes:
acquiring specific information, wherein the specific information includes a current scene type and/or the number of demonstrations of the target application function;
and acquiring, according to the specific information, target voice operation demonstration content for guiding the user to perform the target voice operation, wherein the target voice operation demonstration content is the voice operation demonstration content corresponding to the target application function.
In one embodiment, when the specific information includes the current scene type, before acquiring the specific information, the method further includes:
acquiring current scene information;
and determining the current scene type according to the current scene information.
According to a second aspect, the present application provides a voice operation guidance apparatus for an application function, which in one embodiment includes:
a display module, configured to display a floating control on a target interface;
an operation receiving module, configured to receive a specific operation performed by a user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated;
and a guidance module, configured to guide the user to perform the target voice operation in response to the specific operation.
According to a third aspect, the present application provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of any of the method embodiments described above.
According to a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any of the method embodiments described above.
In the above embodiments of the present application, a user can start a voice operation guidance mode for a target application function by performing a specific operation on a floating control on an interface; once the voice operation demonstration mode is triggered, the user is guided to perform the target voice operation. Compared with a conventional voice function instruction manual, the technical solution provided by these embodiments spares the user the tedious step of paging through a manual: upon seeing an unfamiliar application or operation interface, the user can directly and quickly obtain, through the floating control, the voice operation method for a function in the application or target operation interface.
Drawings
FIG. 1 is a diagram of an application environment of a voice operation guidance method for an application function according to an embodiment of the present application;
FIG. 2 is a flowchart of a voice operation guidance method for an application function according to an embodiment of the present application;
FIG. 3 is a flowchart of a voice operation guidance method for an application function according to another embodiment of the present application;
FIG. 4 is a block diagram of a voice operation guidance apparatus for an application function according to one or more embodiments of the present application;
FIG. 5 is a block diagram of the internal structure of a computer device according to one or more embodiments of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The present application provides a voice operation guidance method for an application function. In this embodiment, the method may be applied in the application environment shown in FIG. 1. The user 10 may start a voice operation guidance mode for a target application function by performing a specific operation on a floating control on an interface displayed by the electronic device 20. After the voice operation demonstration mode is triggered, the user is guided to perform the target voice operation; specifically, the voice operation demonstration content corresponding to the target application function, namely the target voice operation demonstration content, may be demonstrated to the user 10. Here, the target voice operation refers to operating the target application function by voice, for example starting the target application function by voice. Taking a music function as an example, the user may issue related voice instructions such as "play Qilixiang" or "pause".
The electronic device 20 may include, but is not limited to, various in-vehicle head units, in-vehicle terminals, personal computers, notebook computers, smartphones, tablet computers, desktop computers, and the like.
The voice operation guidance method for an application function provided in this embodiment includes the steps shown in FIG. 2; the following description takes the application of this method to the electronic device in FIG. 1 as an example.
S110: display a floating control on a target interface.
S120: receive a specific operation performed by the user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated.
S130: in response to the specific operation, guide the user to perform the target voice operation.
The floating control may be a floating window, a floating ball, a floating card, or a floating icon. The floating control may specifically be a voice assistant control; it can hover at any position on the interface and can also be dragged to any position on the interface.
The user can perform a specific operation on the floating control to instruct the electronic device to enter a voice operation guidance mode for the target application function. Based on the specific operation performed by the user, the electronic device can determine which application function of which application requires a voice operation demonstration; the concrete form of the specific operation can be set according to the needs of the actual scene, and this embodiment does not particularly limit it.
After detecting the specific operation, the electronic device enters the voice operation guidance mode for the target application function and begins to demonstrate the voice operation demonstration content corresponding to the target application function, namely the target voice operation demonstration content, so as to guide the user in using voice to operate the target application function. The voice operation demonstration content corresponding to each application function may take different forms such as text, voice, pictures, and/or video. In addition, this embodiment does not specifically limit the target application function: it may be any application function of the target application that supports the user issuing instructions by voice. Taking a music application as an example, the application function may be "song recommendation", "settings", "song switching mode", "pause/start", "comment", and so on. Further, the target application refers to any application in the electronic device that includes an application function supporting voice operation.
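By way of illustration only, the three-step flow S110 to S130 can be organized as a small controller that maps a recognized specific operation to the demonstration content of the target application function. The following Kotlin sketch is a minimal, assumption-laden example; the names (VoiceGuideController, DemoContent, onSpecificOperation) are invented here and do not come from the patent:

```kotlin
// Hypothetical controller for the S110-S130 flow; all names are illustrative.
data class DemoContent(val title: String, val mediaUri: String)

class VoiceGuideController(
    // Catalog mapping an application-function id to its demonstration content.
    private val demoCatalog: Map<String, DemoContent>
) {
    // S120/S130: once the specific operation for a target application function
    // is recognized, look up and present that function's demonstration content.
    fun onSpecificOperation(targetFunctionId: String) {
        val demo = demoCatalog[targetFunctionId] ?: return
        present(demo)
    }

    private fun present(demo: DemoContent) {
        // The content may be text, voice, pictures and/or video; here we only log.
        println("Demonstrating voice operation: ${demo.title} (${demo.mediaUri})")
    }
}
```

A caller would register the catalog once, for example `VoiceGuideController(mapOf("music.pause" to DemoContent("Pause a song", "asset://demo/pause.mp4")))`, where the ids and URIs are likewise hypothetical.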
Compared with a conventional voice function instruction manual, the technical solution provided by this embodiment allows a user who encounters an unfamiliar application or operation interface to quickly obtain the voice operation method for a function in the application or target operation interface directly through the floating control, without tediously paging through a manual.
In one embodiment, the specific operation may include the user dragging the floating control into the target controllable area of the target interface and keeping the floating control within the target controllable area for a preset duration.
The target controllable area refers to the controllable area corresponding to the target application function.
In this embodiment, suppose a user wants to use the voice function of a certain application, or of a certain application function of an application, but is unfamiliar with its specific voice operation. The user can actively drag the floating control into the controllable area corresponding to that application or application function and keep it there for a preset duration, for example 2 seconds; the electronic device then automatically enters the voice operation guidance mode for that application function and actively demonstrates to the user how that application function carries out voice conversation interaction.
Because the user can find the controllable area of each application function on the display screen very conveniently and quickly, the specific operation provided by this embodiment lets the user unambiguously instruct the electronic device which application function's voice operation to demonstrate, which improves the convenience of issuing the instruction and makes misoperation unlikely.
In one embodiment, the specific operation may be a continuous touch operation: for example, the user drags the floating control with a finger from its default position into the controllable area of the target application function and keeps the finger on the screen until the dwell time of the floating control in that controllable area reaches the preset duration, at which point the specific operation is complete.
Further, based on the foregoing embodiments, in one embodiment, receiving the specific operation performed by the user on the floating control includes, as shown in FIG. 3:
S121: receive a move operation performed by the user on the floating control;
S122: in response to the move operation, acquire the current position of the floating control;
S123: determine, according to the current position, whether the floating control is located within the target controllable area;
S124: if the floating control is located within the target controllable area, acquire the dwell time of the floating control and compare it with the preset duration;
S125: if the dwell time is not less than the preset duration, determine that the specific operation has been received.
The move operation may be a touch or slide operation performed by the user on the floating control. After receiving such an operation, the electronic device displays the movement of the floating control on the display screen while acquiring, in real time, the control's current position (its coordinates on the display screen), and determines, based on that position, whether the floating control is located within the target controllable area.
If the floating control is determined to be within the target controllable area, its dwell time is acquired and compared with the preset duration; if the dwell time is not less than the preset duration, it is determined that the specific operation has been received, and if the dwell time is less than the preset duration, it is determined that the specific operation has not been received.
If the floating control is not within the target controllable area, no operation is performed.
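A minimal sketch of steps S121 to S125 follows, assuming the host interface reports every position of the floating control during the move operation; the Rect type, the 2-second preset, and the callback wiring are illustrative assumptions rather than the patent's concrete implementation:

```kotlin
// Simple axis-aligned rectangle used as a controllable area.
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int) {
    fun contains(x: Int, y: Int) = x in left..right && y in top..bottom
}

// Recognizes the specific operation: dwell inside the target controllable
// area for at least presetMillis (timing while inside the area).
class SpecificOperationDetector(
    private val targetArea: Rect,
    private val presetMillis: Long = 2_000L,          // e.g. the 2 s example above
    private val onSpecificOperation: () -> Unit
) {
    private var enteredAt: Long? = null

    // S121/S122: called with the floating control's current position.
    fun onControlMoved(x: Int, y: Int, nowMillis: Long) {
        if (targetArea.contains(x, y)) {               // S123: inside the area?
            val start = enteredAt ?: nowMillis.also { enteredAt = it }
            if (nowMillis - start >= presetMillis) {   // S124/S125: dwell check
                enteredAt = null
                onSpecificOperation()
            }
        } else {
            enteredAt = null                           // dragged out: reset timing
        }
    }
}
```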
Correspondingly, before acquiring the dwell time of the floating control, the method further includes the following step (1) or step (2):
(1) when the floating control is detected within the target controllable area, start timing its dwell time, and stop the timing when the floating control is detected being dragged out of the target controllable area;
(2) when the floating control is detected within the target controllable area and has stopped moving, start timing its dwell time, and stop the timing when the floating control is detected moving again or when the distance of the renewed movement exceeds a preset distance.
That is, the dwell time of the floating control over the controllable area can be calculated in two ways.
In one way, the electronic device starts counting the dwell time once the user has dragged the floating control into the controllable area of the target application function and holds the finger still; during the timing, the user's finger must not move, or must not move beyond a preset distance. This makes it easy for the user to start the timing of the voice operation demonstration mode, but if the user inadvertently moves the finger, or moves it too far, while the dwell time is being counted, the specific operation may have to be performed again.
In the other way, the electronic device starts counting the dwell time as soon as the user drags the floating control into the controllable area of the target application function; the user's finger may move during the timing, and the calculation of the dwell time is unaffected as long as the contact point of the finger remains within the controllable area of the target application function.
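The first, hold-still way can be sketched as follows, assuming the platform signals when the control stops moving; the drift threshold stands in for the "preset distance" and is an invented value:

```kotlin
import kotlin.math.hypot

// Timing strategy (2) above: start timing once the control has stopped moving
// inside the area; cancel if it moves again beyond a preset distance.
class DwellTimer(private val presetDistancePx: Double = 20.0) {
    private var anchorX = 0.0
    private var anchorY = 0.0
    private var startedAt: Long? = null

    // Called when the control is detected stationary inside the target area.
    fun onStoppedInArea(x: Double, y: Double, nowMillis: Long) {
        anchorX = x; anchorY = y; startedAt = nowMillis
    }

    // Called on further movement; returns the dwell time so far,
    // or null once timing has been cancelled.
    fun onMoved(x: Double, y: Double, nowMillis: Long): Long? {
        val start = startedAt ?: return null
        return if (hypot(x - anchorX, y - anchorY) > presetDistancePx) {
            startedAt = null                     // moved too far: stop timing
            null
        } else {
            nowMillis - start                    // small drift tolerated
        }
    }
}
```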
In one embodiment, the target interface is a first interface or a second interface. The first interface is an interactive interface presented when the target application is not open (for example, the system desktop), and it contains the application icon of the target application; the second interface is an application interface of the target application (an interface presented after the target application is opened). The target application refers to any application in the electronic device that includes an application function supporting voice operation, and the target application function refers to any application function of the target application that supports voice operation.
Correspondingly, determining, according to the current position, whether the floating control is located within the target controllable area includes the following steps (1) to (3):
(1) acquire position information of the controllable area corresponding to each application function of the target application that supports voice operation;
(2) determine, according to the current position and the position information of the controllable area corresponding to each such application function, whether the floating control is located within the controllable area corresponding to any application function of the target application that supports voice operation;
(3) if the determination result is yes, determine that the floating control is located within the target controllable area.
Further, if the determination result is no, it is determined that the floating control is not located within the target controllable area.
Since the position of an application icon, or of each application function within an application interface, is usually fixed, the position information of the controllable area of each voice-capable application function of the target application can be recorded in advance. A controllable area may be a rectangle, so its position information may consist of two corner points, such as the coordinates of the upper-left and lower-right corners, or of the lower-left and upper-right corners.
The position of the floating control may be represented by one coordinate or by several coordinates. During the determination, if at least one position coordinate of the floating control lies within the controllable area corresponding to some voice-capable application function, the floating control is determined to be within that controllable area; otherwise, it is determined not to be.
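The containment check of steps (1) to (3) might look like the following sketch, which reuses the Rect type from the earlier sketch; recording each area by its corner points and testing every reported coordinate of the control follow the description above, while the names are invented:

```kotlin
// Pre-recorded controllable area of one voice-capable application function.
data class FunctionArea(val functionId: String, val area: Rect)

// Steps (1)-(3): return the first voice-capable function whose controllable
// area contains at least one of the control's position coordinates, or null
// if the control lies outside all recorded areas.
fun hitTest(positions: List<Pair<Int, Int>>, areas: List<FunctionArea>): FunctionArea? =
    areas.firstOrNull { fa -> positions.any { (x, y) -> fa.area.contains(x, y) } }
```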
Understandably, if the user drags the floating control into a controllable area corresponding to a non-target application function (that is, an application function that does not support voice operation), or drags it into the target controllable area but the dwell time does not reach the preset duration, the electronic device does not respond to the user's operation. Whether the floating control is located within the controllable area corresponding to a non-target application function can be determined in the same way as described above.
Furthermore, after the user completes the specific operation, the floating control can automatically return to its default position on the interface (which may be any position along the screen edge), which prevents the voice operation demonstration mode from being interrupted by an accidental user operation. In addition, when the floating control is at a non-default position and no new touch signal is received within a preset period, it automatically returns to the default position, so that the floating control does not remain parked at some non-default position.
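One plausible realization of this snap-back behaviour is a simple idle timer, sketched below; the 5-second timeout and the callback are assumptions, since the text only requires a "preset period":

```kotlin
// Returns the floating control to its default edge position once no new
// touch signal has arrived for idleTimeoutMillis at a non-default position.
class SnapBackScheduler(
    private val idleTimeoutMillis: Long = 5_000L,   // assumed value
    private val snapToDefault: () -> Unit
) {
    private var lastTouchAt = 0L

    fun onTouchSignal(nowMillis: Long) { lastTouchAt = nowMillis }

    // Invoked periodically by the host UI's timer loop.
    fun tick(nowMillis: Long, atDefaultPosition: Boolean) {
        if (!atDefaultPosition && nowMillis - lastTouchAt >= idleTimeoutMillis) {
            snapToDefault()
        }
    }
}
```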
In one embodiment, before guiding the user to perform the target voice operation, the method further includes the following steps: acquiring specific information, wherein the specific information includes a current scene type and/or the number of demonstrations of the target application function; and acquiring, according to the specific information, target voice operation demonstration content for guiding the user to perform the target voice operation, wherein the target voice operation demonstration content includes the voice operation demonstration content corresponding to the target application function.
In this embodiment, the voice operation demonstration content corresponding to the target application function can be selected adaptively based on the current scene and/or how many times the user has used the demonstration; the content demonstrated to the user changes with the scene and/or the number of uses, so the user can quickly learn the relevant usage method, the user's familiarity and frequency of use rise quickly, and the user has a better experience of learning voice operations.
For example, take the electronic device to be an in-vehicle head unit and the target application to be a video application. When the vehicle is driving at high speed and the user invokes the demonstration mode for the video application for the first time, the voice operation demonstration content of the video application should be shown to the user in a simplified, summarized form, and at the same time the user should be prompted to reduce or refrain from watching video content, in order to ensure driving safety at high speed.
The scene type can be set according to the actual scene, for example high-speed driving, urban driving, and so on. The number of demonstrations may be the historical total number of demonstrations of the target application function, or the total number of demonstrations within a predetermined period. In addition, corresponding demonstration content presentation rules can be preset for different scene types or different demonstration counts, and the corresponding voice operation demonstration content is then acquired as the target voice operation demonstration content based on those rules.
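One way to encode such presentation rules is a lookup keyed by scene type with a special case for first demonstrations, sketched below; the rule table, the enum values, and the "simplified" marker are invented for illustration, and DemoContent is reused from the first sketch:

```kotlin
enum class SceneType { HIGH_SPEED_DRIVING, URBAN_DRIVING, PARKED }

// Picks demonstration content for a function according to the specific
// information (current scene type and/or demonstration count).
fun selectDemoContent(
    functionId: String,
    scene: SceneType,
    demoCount: Int,
    catalog: Map<String, Map<SceneType, DemoContent>>
): DemoContent? {
    val byScene = catalog[functionId] ?: return null
    val content = byScene[scene] ?: byScene.values.firstOrNull() ?: return null
    // First demonstration while driving at high speed: show the simplified,
    // summarized form described in the example above.
    return if (scene == SceneType.HIGH_SPEED_DRIVING && demoCount == 0)
        content.copy(title = "[simplified] ${content.title}")
    else content
}
```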
Further, in one embodiment, when the specific information includes the current scene type, before acquiring the specific information the method further includes: acquiring current scene information; and determining the current scene type according to the current scene information.
The scene information to be acquired can be chosen according to the scene types that have been defined. Taking the high-speed driving scene type as an example, one or more of the current driving speed, geographic position, weather conditions, road surface conditions, and traffic conditions of the electronic device may be acquired for analysis.
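Deriving the scene type from such signals might then look like the following sketch; the 80 km/h threshold is an assumption, as the text does not fix one:

```kotlin
// Maps raw scene information to a scene type; thresholds are illustrative.
fun classifyScene(speedKmh: Double, isParked: Boolean): SceneType = when {
    isParked -> SceneType.PARKED
    speedKmh >= 80.0 -> SceneType.HIGH_SPEED_DRIVING
    else -> SceneType.URBAN_DRIVING
}
```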
It should be noted that, unless explicitly stated herein, the steps of the voice operation guidance method provided in any of the above embodiments are not strictly limited to the order described and may be performed in other orders. Moreover, at least some of the steps may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Based on the same inventive concept, the present application also provides a voice operation guidance apparatus for an application function. In this embodiment, as shown in FIG. 4, the apparatus includes the following modules:
a display module 110, configured to display a floating control on a target interface;
an operation receiving module 120, configured to receive a specific operation performed by a user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated;
and a guidance module 130, configured to guide the user to perform the target voice operation in response to the specific operation.
In one embodiment, the specific operation includes the user dragging the floating control into the target controllable area of the target interface and keeping the floating control within the target controllable area for a preset duration.
In one embodiment, the operation receiving module 120 includes:
a receiving sub-module, configured to receive a move operation performed by the user on the floating control;
an acquisition sub-module, configured to acquire, in response to the move operation, the current position of the floating control;
a determination sub-module, configured to determine, according to the current position, whether the floating control is located within the target controllable area;
a comparison sub-module, configured to acquire the dwell time of the floating control when it is located within the target controllable area and compare the dwell time with the preset duration;
and a decision sub-module, configured to determine that the specific operation has been received when the dwell time is not less than the preset duration.
In one embodiment, the operation receiving module 120 further includes a timing sub-module, configured to perform the following before the dwell time of the floating control is acquired:
when the floating control is detected within the target controllable area, start timing its dwell time, and stop the timing when the floating control is detected being dragged out of the target controllable area;
or,
when the floating control is detected within the target controllable area and has stopped moving, start timing its dwell time, and stop the timing when the floating control is detected moving again or when the distance of the renewed movement exceeds a preset distance.
In one embodiment, the target interface is the first interface or the second interface; the first interface includes an interactive interface that contains the application icon of the target application; the second interface includes an application interface of the target application; the target application refers to any application in the electronic device that includes an application function supporting voice operation; and the target application function refers to any application function of the target application that supports voice operation.
In one embodiment, the determination sub-module includes:
an acquisition unit, configured to acquire position information of the controllable area corresponding to each application function of the target application that supports voice operation;
a judgment unit, configured to determine, according to the current position and the position information of the controllable area corresponding to each such application function, whether the floating control is located within the controllable area corresponding to any application function of the target application that supports voice operation;
and a decision unit, configured to determine, when the judgment result is yes, that the floating control is located within the target controllable area, and to determine, when the judgment result is no, that the floating control is not located within the target controllable area.
In one embodiment, the apparatus further includes:
a first acquisition module, configured to acquire specific information, wherein the specific information includes a current scene type and/or the number of demonstrations of the target application function;
and a second acquisition module, configured to acquire, according to the specific information, target voice operation demonstration content for guiding the user to perform the target voice operation, wherein the target voice operation demonstration content is the voice operation demonstration content corresponding to the target application function.
In one embodiment, the apparatus further includes:
a third acquisition module, configured to acquire current scene information;
and a scene determination module, configured to determine the current scene type according to the current scene information.
For specific limitations of the voice operation guidance apparatus for an application function, reference may be made to the limitations of the voice operation guidance method above, which are not repeated here. Each module of the apparatus may be implemented wholly or partly in software, hardware, or a combination of both. The modules may be embedded in, or independent of, a processor of the computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke them and perform the operations corresponding to each module.
In one embodiment, a computer device is provided, the internal structure of which may be as shown in FIG. 5.
The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores data such as the voice operation demonstration content corresponding to each application function of each target application; for the specifically stored data, reference may also be made to the limitations in the above method embodiments. The network interface of the computer device communicates with an external terminal through a network connection. The computer program, when executed by the processor, implements a voice operation guidance method for an application function.
Those skilled in the art will appreciate that the structure shown in FIG. 5 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown, combine certain components, or arrange components differently.
This embodiment also provides a computer device including a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the following steps:
displaying a floating control on a target interface; receiving a specific operation performed by a user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated; and in response to the specific operation, guiding the user to perform the target voice operation.
In one embodiment, the specific operation includes the user dragging the floating control into the target controllable area of the target interface and keeping the floating control within the target controllable area for a preset duration.
In one embodiment, when the processor executes the computer program and receives the specific operation performed by the user on the floating control, the following steps are further implemented:
receiving a move operation performed by the user on the floating control; in response to the move operation, acquiring the current position of the floating control; determining, according to the current position, whether the floating control is located within the target controllable area; if the floating control is located within the target controllable area, acquiring the dwell time of the floating control and comparing it with the preset duration; and if the dwell time is not less than the preset duration, determining that the specific operation has been received.
In one embodiment, before acquiring the dwell time of the floating control, the processor further implements the following steps when executing the computer program:
when the floating control is detected within the target controllable area, starting to time its dwell time, and stopping the timing when the floating control is detected being dragged out of the target controllable area; or, when the floating control is detected within the target controllable area and has stopped moving, starting to time its dwell time, and stopping the timing when the floating control is detected moving again or when the distance of the renewed movement exceeds a preset distance.
In one embodiment, when the processor executes the computer program and determines, according to the current position, whether the floating control is located within the target controllable area, the following steps are further implemented:
acquiring position information of the controllable area corresponding to each application function of the target application that supports voice operation; determining, according to the current position and that position information, whether the floating control is located within the controllable area corresponding to any application function of the target application that supports voice operation; if the determination result is yes, determining that the floating control is located within the target controllable area; and if the determination result is no, determining that the floating control is not located within the target controllable area.
In one embodiment, before guiding the user to perform the target voice operation, the processor further implements the following steps when executing the computer program:
acquiring specific information, wherein the specific information includes a current scene type and/or the number of demonstrations of the target application function; and acquiring, according to the specific information, target voice operation demonstration content for guiding the user to perform the target voice operation, wherein the target voice operation demonstration content is the voice operation demonstration content corresponding to the target application function.
In one embodiment, when the specific information includes the current scene type, the processor further implements the following steps before acquiring the specific information when executing the computer program:
acquiring current scene information; and determining the current scene type according to the current scene information.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the following steps:
displaying a floating control on a target interface; receiving a specific operation performed by a user on the floating control, wherein the specific operation includes an operation instructing that a voice operation of a target application function be demonstrated; and in response to the specific operation, guiding the user to perform the target voice operation.
In one embodiment, the specific operation includes the user dragging the floating control into the target controllable area of the target interface and keeping the floating control within the target controllable area for a preset duration.
In one embodiment, when executed by the processor and receiving the specific operation performed by the user on the floating control, the computer program further implements the following steps:
receiving a move operation performed by the user on the floating control; in response to the move operation, acquiring the current position of the floating control; determining, according to the current position, whether the floating control is located within the target controllable area; if the floating control is located within the target controllable area, acquiring the dwell time of the floating control and comparing it with the preset duration; and if the dwell time is not less than the preset duration, determining that the specific operation has been received.
In one embodiment, before acquiring the dwell time of the floating control, the computer program, when executed by the processor, further implements the following steps:
when the floating control is detected within the target controllable area, starting to time its dwell time, and stopping the timing when the floating control is detected being dragged out of the target controllable area; or, when the floating control is detected within the target controllable area and has stopped moving, starting to time its dwell time, and stopping the timing when the floating control is detected moving again or when the distance of the renewed movement exceeds a preset distance.
In one embodiment, when executed by the processor and determining, according to the current position, whether the floating control is located within the target controllable area, the computer program further implements the following steps:
acquiring position information of the controllable area corresponding to each application function of the target application that supports voice operation; determining, according to the current position and that position information, whether the floating control is located within the controllable area corresponding to any application function of the target application that supports voice operation; if the determination result is yes, determining that the floating control is located within the target controllable area; and if the determination result is no, determining that the floating control is not located within the target controllable area.
In one embodiment, before guiding the user to perform the target voice operation, the computer program, when executed by the processor, further implements the following steps:
acquiring specific information, wherein the specific information includes a current scene type and/or the number of demonstrations of the target application function; and acquiring, according to the specific information, target voice operation demonstration content for guiding the user to perform the target voice operation, wherein the target voice operation demonstration content is the voice operation demonstration content corresponding to the target application function.
In one embodiment, when the specific information includes the current scene type, the computer program, when executed by the processor, further implements the following steps before acquiring the specific information:
acquiring current scene information; and determining the current scene type according to the current scene information.
The descriptions of the above embodiments each have their own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
Those skilled in the art will understand that all or part of the processes of the above method embodiments can be implemented by a computer program instructing related hardware; the computer program may be stored in a non-volatile computer-readable storage medium, and when executed it may include the processes of the above method embodiments. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of those technical features are described, but any such combination should be considered within the scope of this specification as long as it contains no contradiction.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these fall within its scope of protection. Therefore, the scope of protection of this patent application shall be subject to the appended claims.
Claims (10)
1. A voice operation guidance method for an application function, the method comprising:
displaying a floating control on a target interface;
receiving a specific operation performed by a user on the floating control, wherein the specific operation comprises an operation instructing that a voice operation of a target application function be demonstrated;
and in response to the specific operation, guiding the user to perform the target voice operation.
2. The method of claim 1, wherein the specific operation comprises the user dragging the floating control into a target controllable area of the target interface and keeping the floating control within the target controllable area for a preset duration.
3. The method of claim 1 or 2, wherein receiving the specific operation performed by the user on the floating control comprises:
receiving a move operation performed by the user on the floating control;
in response to the move operation, acquiring a current position of the floating control;
determining, according to the current position, whether the floating control is located within a target controllable area;
if the floating control is located within the target controllable area, acquiring a dwell time of the floating control and comparing the dwell time with a preset duration;
and if the dwell time is not less than the preset duration, determining that the specific operation has been received.
4. The method of claim 3, wherein before acquiring the dwell time of the floating control, the method further comprises:
when the floating control is detected within the target controllable area, starting to time the dwell time of the floating control, and stopping the timing when the floating control is detected being dragged out of the target controllable area;
or,
when the floating control is detected within the target controllable area and has stopped moving, starting to time the dwell time of the floating control, and stopping the timing when the floating control is detected moving again or when a distance of the renewed movement exceeds a preset distance.
5. The method of claim 3 or 4, wherein determining, according to the current position, whether the floating control is located within the target controllable area comprises:
acquiring position information of a controllable area corresponding to each application function of the target application that supports voice operation;
determining, according to the current position and the position information of the controllable area corresponding to each such application function, whether the floating control is located within the controllable area corresponding to any application function of the target application that supports voice operation;
and if so, determining that the floating control is located within the target controllable area.
6. The method of any of claims 2 to 5, wherein before guiding the user to perform the target voice operation, the method further comprises:
acquiring specific information, wherein the specific information comprises a current scene type and/or a number of demonstrations of the target application function;
and acquiring, according to the specific information, target voice operation demonstration content for guiding the user to perform the target voice operation, wherein the target voice operation demonstration content comprises the voice operation demonstration content corresponding to the target application function.
7. The method of claim 6, wherein when the specific information comprises the current scene type, before acquiring the specific information the method further comprises:
acquiring current scene information;
and determining the current scene type according to the current scene information.
8. A voice operation guidance apparatus for an application function, the apparatus comprising:
a display module, configured to display a floating control on a target interface;
an operation receiving module, configured to receive a specific operation performed by a user on the floating control, wherein the specific operation comprises an operation instructing that a voice operation of a target application function be demonstrated;
and a guidance module, configured to guide the user to perform the target voice operation in response to the specific operation.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210771672.7A CN115202548B (en) | 2022-06-30 | 2022-06-30 | Voice operation guiding method and device for application function, computer equipment and medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115202548A true CN115202548A (en) | 2022-10-18 |
CN115202548B CN115202548B (en) | 2024-10-18 |
Family
ID=83578126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210771672.7A Active CN115202548B (en) | 2022-06-30 | 2022-06-30 | Voice operation guiding method and device for application function, computer equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115202548B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024139173A1 (en) * | 2022-12-31 | 2024-07-04 | 科大讯飞股份有限公司 | Interface prompt method and apparatus based on voice interaction, and device and storage medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130051234A (en) * | 2011-11-09 | 2013-05-20 | 삼성전자주식회사 | Visual presentation method for application in portable and apparatus thereof |
CN107562336A (en) * | 2017-08-01 | 2018-01-09 | 努比亚技术有限公司 | A kind of method, equipment and computer-readable recording medium for controlling suspension ball |
CN108920238A (en) * | 2018-06-29 | 2018-11-30 | 上海连尚网络科技有限公司 | Operate method, electronic equipment and the computer-readable medium of application |
CN109491562A (en) * | 2018-10-09 | 2019-03-19 | 珠海格力电器股份有限公司 | Interface display method of voice assistant application program and terminal equipment |
CN109801625A (en) * | 2018-12-29 | 2019-05-24 | 百度在线网络技术(北京)有限公司 | Control method, device, user equipment and the storage medium of virtual speech assistant |
CN110472095A (en) * | 2019-08-16 | 2019-11-19 | 百度在线网络技术(北京)有限公司 | Voice guide method, apparatus, equipment and medium |
CN110544473A (en) * | 2018-05-28 | 2019-12-06 | 百度在线网络技术(北京)有限公司 | Voice interaction method and device |
CN114168020A (en) * | 2021-12-10 | 2022-03-11 | Oppo广东移动通信有限公司 | Interaction method, interaction device, terminal equipment and readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN115202548B (en) | 2024-10-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |