CN117453090A - Method, device, equipment, chip and storage medium for calling an intelligent assistant

Info

Publication number: CN117453090A
Application number: CN202311397744.7A
Authority: CN
Prior art keywords: text input, intelligent assistant, terminal, user, screen
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 孙锦琳
Current and original assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72469: User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04817: Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, using icons
    • G06F 3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application disclose a method, a device, equipment, a chip and a storage medium for invoking an intelligent assistant. The method comprises: displaying a navigation bar on a side of a screen of a terminal, wherein the functions of the navigation bar include a task management function and a text input function of an intelligent assistant; the task management function is used by a user to manage tasks running on the terminal, and the text input function is used by the user to input text to the terminal so as to interact with the terminal's intelligent assistant; receiving a first operation performed by the user on the navigation bar; and, in response to the first operation, invoking the text input function. Because the method can invoke the intelligent assistant's text input function based on the user's first operation on the navigation bar, it makes that function more convenient for the user to invoke.

Description

Method, device, equipment, chip and storage medium for calling an intelligent assistant
Technical Field
The present application relates to the field of terminals, and in particular, to a method, apparatus, device, chip and storage medium for invoking an intelligent assistant.
Background
Available data indicate that some users prefer to interact with a terminal's intelligent assistant by text input. However, on current terminals, to use the intelligent assistant's text input function, the user must first call up the assistant's voice interface and then tap a keyboard icon on that interface. Invoking the text input function in this way takes several steps and is inconvenient for the user.
Disclosure of Invention
The present application provides a method, device, equipment, chip and storage medium for invoking an intelligent assistant.
The technical scheme of the application is realized as follows:
In a first aspect, the present application provides a method for invoking an intelligent assistant, applied to a terminal. The method comprises: displaying a navigation bar on a side of a screen of the terminal, wherein the functions of the navigation bar include a task management function and a text input function of an intelligent assistant; the task management function is used by a user to manage tasks running on the terminal, and the text input function is used by the user to input text to the terminal so as to interact with the terminal's intelligent assistant; receiving a first operation performed by the user on the navigation bar; and, in response to the first operation, invoking the text input function.
In a second aspect, embodiments of the present application provide an apparatus for invoking an intelligent assistant. The apparatus comprises: a first display unit, configured to display a navigation bar on a side of a screen of the apparatus, wherein the functions of the navigation bar include a task management function and a text input function of an intelligent assistant, the task management function being used by a user to manage tasks running on the apparatus, and the text input function being used by the user to input text to the apparatus so as to interact with the intelligent assistant of the apparatus; a first receiving unit, configured to receive a first operation performed by the user on the navigation bar; and an invoking unit, configured to invoke the text input function in response to the first operation.
In a third aspect, embodiments of the present application provide a device for invoking an intelligent assistant. The device comprises a memory and a processor, wherein the memory is configured to store computer-executable instructions, and the processor, coupled to the memory, is configured to implement the method according to the first aspect by executing the computer-executable instructions.
In a fourth aspect, embodiments of the present application provide a chip. The chip comprises: a processor for calling and running a computer program from a memory, causing a device on which the chip is mounted to perform the method as described in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program which, when executed by at least one processor, implements a method according to the first aspect.
In the embodiments of the present application, the terminal can display a navigation bar on a side of the screen and can then invoke the intelligent assistant's text input function based on the user's first operation on the navigation bar. This shortens the invocation path of the text input function, improves the convenience with which the user invokes it, and optimizes the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the aspects of the present application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
FIG. 1 is a first schematic flow chart of invoking an intelligent assistant in the prior art;
FIG. 2 is a second schematic flow chart of invoking an intelligent assistant in the prior art;
FIG. 3 is a schematic flow chart of a method for invoking an intelligent assistant according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of invoking the text input function of an intelligent assistant according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the structure of an apparatus for invoking an intelligent assistant according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a hardware entity of a device for invoking an intelligent assistant according to an embodiment of the present application.
Detailed Description
For a more complete understanding of the features and technical content of the embodiments of the present application, reference should be made to the following detailed description of the embodiments of the present application, taken in conjunction with the accompanying drawings, which are for purposes of illustration only and not intended to limit the embodiments of the present application.
Unless defined otherwise, all technical and scientific terms used in the examples of this application have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the embodiments of the application is for the purpose of describing the embodiments of the application only and is not intended to be limiting of the application.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is to be understood that "some embodiments" can be the same subset or different subsets of all possible embodiments and can be combined with one another without conflict. It should also be noted that the term "first/second/third" in reference to the embodiments of the present application is used merely to distinguish similar objects and does not represent a specific ordering for the objects, it being understood that the "first/second/third" may be interchanged with a specific order or sequence, if allowed, to enable the embodiments of the present application described herein to be implemented in an order other than that illustrated or described herein.
It should be understood that the term "and/or" in the embodiments of the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A exists alone, A and B exist together, or B exists alone. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
In the embodiments of the present application, the terminal may also be referred to as a terminal device or an electronic device. The terminal in the embodiment of the application may be, for example, a mobile phone, a tablet computer, a large-screen device, and the like.
Available data indicate that some users prefer to interact with a terminal's intelligent assistant by text input. However, on current terminals, to use the intelligent assistant's text input function, the user must first call up the assistant's voice interface and then tap a keyboard icon on that interface.
As an example, as shown in fig. 1, some terminals (such as mobile phones) display an icon of the intelligent assistant to the right of the search bar at the bottom of the first desktop screen; clicking the icon calls up the voice interface of the intelligent assistant. Further, clicking a keyboard icon on the voice interface invokes the intelligent assistant's text input function.
As another example, as shown in fig. 2, some terminals (such as mobile phones) display an application icon (app icon) of the intelligent assistant on the desktop; clicking the application icon calls up the voice interface of the intelligent assistant. Further, clicking a keyboard icon on the voice interface invokes the text input function.
Invoking the intelligent assistant's text input function in the above manner has the following disadvantage: the invocation path is long, since the assistant's voice interface must be called up before text can be input, and a keyboard icon must then be clicked. Against the background of users increasingly favoring text interaction with the terminal's intelligent assistant, the existing scheme is cumbersome and inconvenient to operate.
In view of this, embodiments of the present application provide a method, apparatus, device, chip and storage medium for invoking an intelligent assistant. In the method, the terminal displays a navigation bar on a side of the screen and can invoke the intelligent assistant's text input function based on the user's first operation on the navigation bar. This shortens the path by which the user invokes the text input function, improves the convenience of invoking it, and optimizes the user experience. In addition, the navigation bar in this method retains its original task management function while also providing the intelligent assistant's text input function, which increases the practicality of the navigation bar and makes full use of it.
Embodiments of the present application will be described in detail below with reference to the accompanying drawings.
Fig. 3 illustrates a method for invoking an intelligent assistant provided in an embodiment of the present application. The method may be applied to a terminal and may include:
S301, displaying a navigation bar on a side of a screen of the terminal, wherein the functions of the navigation bar include a task management function and a text input function of an intelligent assistant.
In this step, the terminal may display a navigation bar on a side of the screen. Here, the screen side refers to an edge of the terminal's screen; taking a rectangular screen as an example, the screen side may be any one of its four edges.
Task management may also be referred to as recent-task management or multitasking. The task management function allows a user to manage tasks running on the terminal. For example, the functions it may implement include, but are not limited to, at least one of the following: 1) viewing currently running tasks (e.g., applications): through task management, the user can see which tasks are currently running on the terminal; 2) switching tasks: the user can switch between the different tasks currently running; 3) force-ending a currently running task.
The intelligent assistant's text input function may be used by the user to input text to the terminal so as to interact with the terminal's intelligent assistant. For example, the user may hold a dialogue/chat/question-and-answer session with the intelligent assistant by entering text; as another example, the user may instruct the terminal to execute a specific instruction, such as "turn on the camera", by entering text.
Illustratively, displaying the navigation bar on the screen side of the terminal may include: displaying the navigation bar on the screen side when the terminal is in any one of a lock-screen state, an off-screen display state, and a desktop state. In other words, in any of these three states the navigation bar can be displayed on the screen side, so the user can use the intelligent assistant's text input function in any of them: by performing the first operation on the displayed navigation bar, the user invokes the text input function, which makes invoking it more efficient and convenient.
The lock-screen state may also be called a lock-screen scene or lock-screen mode. It is the state the terminal enters automatically after a period without user operation; to leave it, the user must enter the relevant unlock information or follow the on-screen prompt to reach the system interface (e.g., the desktop). The always-on display (Always On Display, AOD) state may also be called the off-screen display scene or mode; in it, most of the screen is dark and only part of the area displays information. The desktop state may likewise be called a desktop scene or desktop mode.
In some scenarios, the terminal may display the navigation bar on the screen side not only in the lock-screen, off-screen display, and desktop states but also in other states (for example, when an application is already open), which the embodiments of the present application do not limit. As one implementation, the terminal may display the navigation bar on the screen side in any state whatsoever; in other words, the navigation bar may be displayed on the screen side in the global scene of the terminal system. This lets the user invoke the intelligent assistant's text input function conveniently and quickly.
S302, receiving a first operation performed by the user on the navigation bar.
In this step, the terminal may receive the user's first operation on the navigation bar. The first operation may be used to make the terminal invoke the intelligent assistant's text input function.
In one possible approach, the first operation may comprise a long-press operation and a first sliding operation. The long-press operation causes the terminal to activate the intelligent assistant's text input function; the first sliding operation causes the terminal to display the assistant's text input interface once the text input function is activated.
Note that a first operation comprising a long-press operation and a first sliding operation is merely an example; in another possible approach, the long-press operation could be replaced with, for example, a double-click operation. For ease of explanation, the following takes the first operation comprising a long-press operation and a first sliding operation as an example.
S303, in response to the first operation, invoking the text input function of the intelligent assistant.
In this step, the terminal may invoke the intelligent assistant's text input function in response to the received first operation.
Taking the first operation comprising a long-press operation and a first sliding operation as an example, the terminal invoking the text input function in response to the first operation may include the following steps 31) and 32):
31) In response to the long-press operation, the text input function of the intelligent assistant is activated.
When the terminal receives the user's long press on the navigation bar, it may activate the intelligent assistant's text input function in response. As an example, the press duration required for a long-press operation is greater than or equal to 0.5 seconds. That is, if the terminal detects that the user has pressed the navigation bar for at least 0.5 seconds, the user is considered to have performed a long-press operation, and the text input function can be activated; if the detected press duration is less than 0.5 seconds, the user is considered not to have performed a long-press operation, and in this case the text input function is not activated or fails to be activated.
The 0.5 seconds above is only an example; in practice, the required press duration may be set to another value. It may, for example, be a preconfigured value, or be customized by the user according to personal habit, which the embodiments of the present application do not limit.
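For illustration only, the press-duration check just described can be sketched in Kotlin as follows. This is a minimal sketch under the 0.5-second example, not the terminal's actual implementation; every name in it is hypothetical.

```kotlin
// Hypothetical sketch: the text input function activates only once the
// press on the navigation bar has lasted at least the threshold.
// 500 ms mirrors the 0.5 s example; per the embodiment, it could also
// be preconfigured or user-customized.
const val DEFAULT_LONG_PRESS_THRESHOLD_MS = 500L

class TextInputActivator(private val thresholdMs: Long = DEFAULT_LONG_PRESS_THRESHOLD_MS) {
    var isTextInputActivated = false
        private set

    private var pressStartMs: Long? = null

    // Finger lands on the navigation bar.
    fun onPressDown(nowMs: Long) {
        pressStartMs = nowMs
    }

    // Called while the finger stays down (e.g., from a timer); activates
    // once the press duration reaches the threshold.
    fun onPressHeld(nowMs: Long) {
        val start = pressStartMs ?: return
        if (!isTextInputActivated && nowMs - start >= thresholdMs) {
            isTextInputActivated = true
        }
    }

    // A press shorter than the threshold never activates the function.
    fun onPressUp() {
        pressStartMs = null
    }
}

fun main() {
    val activator = TextInputActivator()
    activator.onPressDown(nowMs = 0)
    activator.onPressHeld(nowMs = 600) // 600 ms >= 500 ms, so activation succeeds
    println(activator.isTextInputActivated) // true
}
```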
In one possible approach, after the text input function of the intelligent assistant is activated, if the terminal detects that the user's finger has lifted from the screen, the terminal may revert to the state in which the text input function is not activated; to activate the function again, the user must long-press the navigation bar again. In another possible approach, after the function is activated, if the terminal detects that the finger has lifted from the screen, the terminal may wait a certain time (e.g., 2 seconds) before reverting to the non-activated state.
With this technical means, the terminal activates the text input function only upon a long press on the navigation bar, which prevents the function from being activated by mistake when the user merely brushes the navigation bar without intending to activate it.
In some embodiments, the method may further comprise: changing the brightness of the navigation bar when the text input function of the intelligent assistant is activated. For example, the brightness may change from a first brightness to a second brightness, helping the user discern whether the text input function is currently activated. In one possible approach, the second brightness may be greater than the first brightness; that is, when the text input function is activated, the terminal may light up the navigation bar to indicate that it has been activated.
32) In the case that the text input function of the intelligent assistant is activated, a text input interface of the intelligent assistant is displayed in response to the first sliding operation, for the user to input text to the terminal and interact with the terminal's intelligent assistant.
With the text input function activated, the terminal may display the intelligent assistant's text input interface in response to the user's first sliding operation on the navigation bar. In this embodiment, the text input interface is used by the user to input text to the terminal so as to interact with the terminal's intelligent assistant: the user enters text through the interface, and the terminal receives the text entered there. As one implementation, the text input interface may include a text input box (or text box) into which the user types the text.
In one possible case, after the text input function is activated, if the terminal detects that the user's finger has lifted from the screen, the terminal reverts to the non-activated state. In this case, the user must perform the first sliding operation immediately after the long press, keeping the finger on the phone screen throughout; the start position of the first sliding operation is then the same as the touch point of the long-press operation.
In another possible case, after the text input function is activated, if the terminal detects that the user's finger has lifted from the screen, the terminal may wait a certain time (e.g., 2 seconds) before reverting to the non-activated state. In this case, the user may briefly leave the screen after the long press (e.g., for up to 2 seconds) before performing the first sliding operation; the start position of the first sliding operation may then be the same as or different from the touch point of the long-press operation.
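Under the same caveat, the two finger-lift behaviors above differ only in how long activation survives after the finger leaves the screen, so a single hypothetical grace-period parameter can model both (0 ms for the first case, about 2000 ms for the second):

```kotlin
// Sketch of the two finger-lift behaviors: graceMs = 0 models the first
// variant (activation ends as soon as the finger lifts); graceMs = 2000
// models the second (activation survives a short lift, here ~2 s).
// Names are illustrative, not from the patent.
class ActivationState(private val graceMs: Long = 0L) {
    var activated = false
        private set

    private var liftedAtMs: Long? = null

    fun activate() {
        activated = true
        liftedAtMs = null
    }

    // Finger lifted from the screen: start (or restart) the grace timer.
    fun onFingerLifted(nowMs: Long) {
        if (activated) liftedAtMs = nowMs
    }

    // Finger lands again within the grace period: activation is kept, so
    // the first sliding operation may start away from the long-press point.
    fun onFingerDown() {
        liftedAtMs = null
    }

    // Called periodically; reverts to the non-activated state once the
    // grace period has expired.
    fun onTick(nowMs: Long) {
        val lifted = liftedAtMs ?: return
        if (nowMs - lifted >= graceMs) {
            activated = false
            liftedAtMs = null
        }
    }
}
```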
In some embodiments, displaying the intelligent assistant's text input interface in response to the first sliding operation comprises: displaying the text input interface when the vertical distance between the sliding position of the first sliding operation and the screen side is greater than or equal to a first threshold.
For example, assuming that the navigation bar is located at the lower side of the screen, when the terminal detects that the vertical distance between the sliding position and the lower side of the screen is greater than or equal to a first threshold value in the process of performing the first sliding operation, a text input interface of the intelligent assistant may be displayed.
The first threshold may be a preconfigured value, for example, or may be further customized by a user according to personal habits, which is not limited in the embodiments of the present application.
In some embodiments, in the case that the text input function of the intelligent assistant is activated, the method may further comprise: displaying part of the content of the intelligent assistant's text input interface when the vertical distance between the sliding position of the first sliding operation and the screen side is greater than or equal to a second threshold and less than the first threshold.
For example, assuming the navigation bar is located at the lower side of the screen, when the terminal detects during the first sliding operation that the vertical distance between the sliding position and the lower side of the screen is greater than or equal to the second threshold and less than the first threshold, it may display part of the content of the text input interface.
As an example, a complete text input interface may include the name of the intelligent assistant (e.g., "XX assistant"), a text input box, and a prompt/welcome message (e.g., "Hello, what can I help you with?"). When the terminal detects during the first sliding operation that the vertical distance between the sliding position and the lower side of the screen is greater than or equal to the second threshold and less than the first threshold, the displayed content may be, for example, only the name of the intelligent assistant, such as "XX assistant". Thus, as the first sliding operation proceeds, the assistant's name appears first when the finger's sliding position reaches the second threshold, and the complete text input interface appears when it reaches the first threshold; this animated effect helps improve the user experience.
The second threshold may, for example, be a preconfigured value, or may be customized by the user according to personal habit, which the embodiments of the present application do not limit. In one example, the second threshold may be 1 centimeter.
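The two-threshold reveal amounts to mapping the slide's vertical distance onto three panel states. A minimal sketch, with invented pixel values standing in for the configurable thresholds:

```kotlin
// Sketch of the two-threshold reveal: the vertical slide distance from
// the screen side is mapped to one of three panel states. The pixel
// values below are placeholders; the patent leaves both thresholds
// configurable (the 1 cm second threshold is its example).
enum class PanelState { HIDDEN, NAME_ONLY, FULL_TEXT_INPUT_INTERFACE }

fun panelStateFor(slideDistancePx: Float, secondThresholdPx: Float, firstThresholdPx: Float): PanelState =
    when {
        slideDistancePx >= firstThresholdPx -> PanelState.FULL_TEXT_INPUT_INTERFACE
        slideDistancePx >= secondThresholdPx -> PanelState.NAME_ONLY
        else -> PanelState.HIDDEN
    }

fun main() {
    // Assume ~160 px corresponds to the 1 cm second-threshold example.
    println(panelStateFor(100f, 160f, 480f)) // HIDDEN
    println(panelStateFor(200f, 160f, 480f)) // NAME_ONLY: assistant's name shown
    println(panelStateFor(500f, 160f, 480f)) // FULL_TEXT_INPUT_INTERFACE
}
```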
In some embodiments, the method may further comprise: receiving a second operation from the user; and closing the text input interface in response to the second operation.
As one implementation, the second operation may comprise a touch operation whose touch point on the terminal's screen lies outside the intelligent assistant's text input interface. That is, when the text input interface is displayed on the screen, the user may close it by tapping an area of the screen outside the interface.
As another implementation, the second operation may comprise a second sliding operation toward the side of the screen where the navigation bar is located. The start position of the second sliding operation may lie within the intelligent assistant's text input interface (including its border).
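Both closing gestures reduce to hit tests against the interface bounds. The sketch below assumes a bottom-anchored interface with screen coordinates whose y axis grows downward; the helper names are hypothetical:

```kotlin
// Hit-test sketch of the two closing gestures: a tap whose touch point
// falls outside the text input interface, or a slide that starts inside
// the interface and moves toward the navigation bar's screen side.
data class Bounds(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    fun contains(x: Float, y: Float): Boolean = x in left..right && y in top..bottom
}

// Second operation, first form: touch point outside the interface.
fun shouldCloseOnTap(panel: Bounds, x: Float, y: Float): Boolean = !panel.contains(x, y)

// Second operation, second form: slide starting inside the interface
// (border included) and heading down, toward the bottom screen side.
fun shouldCloseOnSlide(panel: Bounds, startX: Float, startY: Float, endY: Float): Boolean =
    panel.contains(startX, startY) && endY > startY
```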
In some embodiments, the method may further comprise: executing a third operation based on the text received by the text input interface. Illustratively, the third operation may include: displaying feedback information, where the feedback information is feedback on the content of the received text; and/or executing an instruction indicated by the received text.
In one example, the third operation includes displaying feedback information. That is, after receiving text from the user through the intelligent assistant's text input interface, the terminal may respond to the text and display feedback information on the screen (e.g., on the text input interface). For example, if the received text is "please help me query for nearby food", the terminal may display the nearby food information it finds on the screen (e.g., on the text input interface).
For another example, the third operation includes executing the instruction indicated by the text. That is, after receiving text from the user through the text input interface, the terminal may parse the instruction the text indicates and execute it. For example, if the indicated instruction is "turn on the camera", the terminal may launch the camera application based on the instruction, or display a card containing a camera icon on the screen (e.g., on the text input interface); if the user taps the camera icon on the card, the terminal launches the camera application.
It will be appreciated that the third operation described above as being performed by the terminal may also be considered to be performed by the terminal's intelligent assistant (or intelligent assistant module).
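The embodiment does not constrain how the assistant decides between feedback and instruction execution; the sketch below uses a toy keyword match as a stand-in for whatever intent parsing the terminal actually performs, and all names are illustrative:

```kotlin
// Sketch of the third operation: the terminal (or its intelligent
// assistant module) either shows feedback for the received text,
// executes an instruction the text indicates, or both.
sealed class AssistantAction
data class ShowFeedback(val message: String) : AssistantAction()
data class RunInstruction(val instruction: String) : AssistantAction()

fun handleReceivedText(text: String): List<AssistantAction> = when {
    "turn on the camera" in text ->
        listOf(RunInstruction("launch_camera_app"))
    "nearby food" in text ->
        listOf(ShowFeedback("Here are the nearby food results I found..."))
    else ->
        listOf(ShowFeedback("Replying to: $text"))
}

fun main() {
    println(handleReceivedText("please turn on the camera"))
    println(handleReceivedText("please help me query for nearby food"))
}
```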
According to the method of this embodiment, the user can invoke the intelligent assistant's text input function directly, without first calling up the assistant's voice interface; this shortens the path to the function and improves convenience.
It should be noted that the approach provided in the embodiments of the present application, in which the intelligent assistant's text input function is invoked based on the user's first operation on the navigation bar, can also be extended to invoking other functions. For example, in some application scenarios, the intelligent assistant's voice function, or other shortcut functions, may be invoked based on the user's first operation on the navigation bar.
In some embodiments, in the case that the text input function of the intelligent assistant is not activated, the method may further comprise: receiving a third sliding operation performed by the user on the navigation bar; and invoking the task management function in response to the third sliding operation. That is, when the text input function is not activated (e.g., before the terminal receives the first operation, or after the terminal has reverted to the non-activated state), the terminal may invoke the task management function in response to the third sliding operation. Invoking the task management function can also be understood as opening a task management interface through which the user manages the tasks running on the terminal.
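Since the same navigation bar serves both roles, dispatching a sliding gesture reduces to a single branch on the activation state; sketched with illustrative names:

```kotlin
// Illustrative branch: with the text input function activated by a
// prior long press, a slide evokes the text input interface; otherwise
// the same navigation bar serves its task management role.
enum class SlideResult { TEXT_INPUT_INTERFACE, TASK_MANAGEMENT_INTERFACE }

fun onNavigationBarSlide(textInputActivated: Boolean): SlideResult =
    if (textInputActivated) SlideResult.TEXT_INPUT_INTERFACE
    else SlideResult.TASK_MANAGEMENT_INTERFACE
```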
According to the method of this embodiment, when the text input function of the intelligent assistant is activated, the terminal can invoke that function based on the user's first operation on the navigation bar; when it is not activated, the terminal can invoke the task management function based on the user's third sliding operation on the navigation bar. Different functions are thus realized with one navigation bar on the terminal screen, which increases the practicality of the navigation bar and makes full use of it.
In order to facilitate understanding of the technical solutions of the embodiments of the present application, a procedure for calling a text input function of an intelligent assistant will be described below with reference to an example.
Fig. 4 is a schematic flow chart of a text input function for calling an intelligent assistant according to an embodiment of the present application.
Here, the navigation bar may be located, for example, at the lower side of the terminal screen. As shown in fig. 4, the flow of invoking the intelligent assistant's text input function may include:
S401, long-press to activate the text input function of the intelligent assistant.
For example, in the system-global scenario (including the lock-screen, off-screen display and desktop scenarios), the intelligent assistant's text input function may be activated by pressing the system navigation bar for 0.5 seconds. After activation, the navigation bar may change from a dimmer brightness to a brighter one.
S402, slide up to display the intelligent assistant panel.
After the text input function is activated, performing a slide-up operation (e.g., sliding up about 1 cm) may display an intelligent assistant panel, which may show, for example, the name of the intelligent assistant, such as "XX assistant". As shown in fig. 4, the panel may be displayed at the bottom of the terminal screen.
S403, continue sliding up to display the intelligent assistant's text input interface.
On the basis of S402, if the user continues to slide up, the intelligent assistant's text input interface may be displayed. As shown in fig. 4, the text input interface may be displayed at the bottom of the terminal screen. It may include, for example, the name of the intelligent assistant (e.g., "XX assistant"), a text input box, and a prompt/welcome message (e.g., "Hello, what can I help you with?").
According to the method of this embodiment, in the system-global scenario, the text input function of the intelligent assistant can be activated by long-pressing the system navigation bar for 0.5 seconds; sliding up then pulls up the intelligent assistant panel, whose state changes in real time with the height the user's finger has slid. For example, in S402 the panel may show the name of the intelligent assistant, while in S403 it may show the complete text input interface. This fills the gap left by the lack of a fixed, resident on-screen entry for calling the intelligent assistant in the system-global scenario of the mobile phone, lets the user reach the terminal's intelligent assistant more conveniently, and makes full use of the navigation bar and the space at the bottom of the terminal screen. In addition, the method of this embodiment enters the intelligent assistant's text input state directly, without calling up its voice interface, which shortens the user's path to the text input function. Meanwhile, the way the assistant's appearance follows the user's gesture makes the experience feel more novel.
Based on the foregoing embodiments, embodiments of the present application provide a corresponding apparatus for invoking an intelligent assistant. The modules/units the apparatus comprises, and the sub-modules/sub-units they comprise, may be implemented by a processor in a computer device with information processing capability, or of course by specific logic circuits. In practice, the processor may be a central processing unit (Central Processing Unit, CPU), a microprocessor (Microprocessor Unit, MPU), a digital signal processor (Digital Signal Processor, DSP), a field programmable gate array (Field Programmable Gate Array, FPGA), or the like.
Fig. 5 shows the composition of an apparatus 500 for invoking an intelligent assistant according to an embodiment of the present application. As shown in fig. 5, the apparatus 500 may include: a first display unit 501, configured to display a navigation bar on a side of a screen of the apparatus 500, wherein the functions of the navigation bar include a task management function and a text input function of an intelligent assistant, the task management function being used by a user to manage tasks running on the apparatus 500, and the text input function being used by the user to input text to the apparatus 500 so as to interact with the intelligent assistant of the apparatus 500; a first receiving unit 502, configured to receive a first operation performed by the user on the navigation bar; and an invoking unit 503, configured to invoke the text input function in response to the first operation.
In some embodiments, the first operation comprises a long-press operation and a first sliding operation, and the invoking unit 503 includes: an activation sub-unit, configured to activate the text input function in response to the long-press operation; and a display sub-unit, configured to display, in the case that the text input function is activated, a text input interface of the intelligent assistant in response to the first sliding operation, the text input interface being used by the user to input text to the apparatus 500 so as to interact with the intelligent assistant of the apparatus 500.
In some embodiments, the display sub-unit is specifically configured to display the text input interface when the vertical distance between the sliding position of the first sliding operation and the screen side is greater than or equal to a first threshold.
In some embodiments, the apparatus 500 further comprises: a second display unit, configured to display, in the case that the text input function is activated, part of the content of the text input interface when the vertical distance between the sliding position of the first sliding operation and the screen side is greater than or equal to a second threshold and less than the first threshold.
In some embodiments, the apparatus 500 further comprises: a second receiving unit, configured to receive a second operation from the user; and a closing unit, configured to close the text input interface in response to the second operation. The second operation comprises: a touch operation whose touch point on the screen of the apparatus 500 lies outside the text input interface; or a second sliding operation toward the screen side, whose start position lies within the text input interface.
In some embodiments, the apparatus 500 further comprises: an execution unit, configured to execute a third operation based on the text received by the text input interface, the third operation comprising: displaying feedback information, where the feedback information is feedback on the content of the text; and/or executing the instruction indicated by the text.
In some embodiments, the apparatus 500 further comprises: a brightness changing unit, configured to change, in the case that the text input function is activated, the brightness of the navigation bar from a first brightness to a second brightness, the second brightness being greater than the first brightness.
In some embodiments, the first display unit 501 is specifically configured to display the navigation bar on the screen side of the apparatus 500 when the apparatus 500 is in any one of the lock-screen state, the off-screen display state, and the desktop state.
The description of the apparatus embodiments above is similar to that of the method embodiments above, with similar advantageous effects as the method embodiments. In some embodiments, functions or modules included in the apparatus provided in the embodiments of the present application may be used to perform the methods described in the embodiments of the methods, and for technical details that are not disclosed in the embodiments of the apparatus of the present application, please refer to the description of the embodiments of the methods of the present application for understanding.
It should be noted that, in the embodiments of the present application, if the above method is implemented in the form of software functional modules and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the related art, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read Only Memory, ROM), a magnetic disk, or an optical disk. Thus, the embodiments of the present application are not limited to any specific hardware, software, or firmware, or to any combination of hardware, software, and firmware.
The embodiment of the application also provides equipment for calling the intelligent assistant, which comprises a memory and a processor, wherein the memory stores a computer program capable of running on the processor, and the processor realizes part or all of the steps in the method when executing the program.
The embodiment of the application also provides a chip. The chip comprises: and a processor for calling and running the computer program from the memory, so that the device on which the chip is mounted performs part or all of the steps of the above method.
The embodiment of the application also provides another chip, which comprises a processor and a communication interface, wherein the processor reads instructions stored in a memory through the communication interface, so that part or all of the steps in the method are realized. In some embodiments, as an implementation manner, the chip further includes a memory, where a computer program or an instruction is stored in the memory, and the processor is configured to execute the computer program or the instruction stored in the memory, and when the computer program or the instruction is executed, the processor is configured to execute some or all of the steps in the above method.
Embodiments of the present application also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs some or all of the steps of the above-described method. The computer readable storage medium may be transitory or non-transitory.
Embodiments of the present application also provide a computer program comprising computer readable code which, when run in a device (such as a terminal), causes a processor in the device to perform some or all of the steps of the method described above.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program which, when read and executed by a computer, performs some or all of the steps of the above-described method. The computer program product may be realized in particular by means of hardware, software or a combination thereof. In some embodiments, the computer program product is embodied as a computer storage medium, in other embodiments the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), or the like.
It should be noted here that: the above description of various embodiments is intended to emphasize the differences between the various embodiments, the same or similar features being referred to each other. The above description of the apparatus, chip, storage medium, computer program and computer program product embodiments is similar to that of the method embodiments described above, with similar advantageous effects as the method embodiments. For technical details not disclosed in the embodiments of the apparatus, chip, storage medium, computer program and computer program product of the present application, please refer to the description of the method embodiments of the present application for understanding.
Fig. 6 is a schematic diagram of a hardware entity of a device for invoking an intelligent assistant according to an embodiment of the present application; the device may be a terminal. The device 600 shown in fig. 6 (hereinafter simply device 600) includes a processor 610, which may call and run a computer program from memory to implement the methods in the embodiments of the present application.
In some embodiments, as shown in fig. 6, device 600 may also include memory 620. Wherein the processor 610 may call and run a computer program from the memory 620 to implement the methods in embodiments of the present application. The memory 620 may be a separate device from the processor 610 or may be integrated into the processor 610.
In some embodiments, as shown in fig. 6, the device 600 may further include a transceiver 630, and the processor 610 may control the transceiver 630 to communicate with other devices, and in particular, may transmit information or data to other devices, or receive information or data transmitted by other devices. The transceiver 630 may include a transmitter and a receiver, among others. Transceiver 630 may further include antennas, the number of which may be one or more.
In some embodiments, the device 600 may be a terminal in the embodiments of the present application, and the device 600 may implement corresponding flows implemented by the terminal in the methods in the embodiments of the present application, which are not described herein for brevity.
It should be appreciated that reference throughout this specification to "one embodiment," "an embodiment," or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, appearances of "in one embodiment," "in an embodiment," or "in some embodiments" in various places throughout this specification do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. It should also be understood that, in the various embodiments of the present application, the sequence numbers of the steps/processes above do not imply an execution order; the execution order of the steps/processes should be determined by their function and internal logic, and the numbering should not constitute any limitation on the implementation of the embodiments of the present application. The numbering of the embodiments above is likewise for description only and does not represent the superiority or inferiority of the embodiments.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The device embodiments described above are only illustrative; for example, the division of the units or modules is only a logical function division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communicative connection between the components shown or discussed may be implemented through some interfaces, and the indirect coupling or communicative connection between devices or units may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; can be located in one place or distributed to a plurality of network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may be separately used as one unit, or two or more units may be integrated in one unit; the integrated units may be implemented in hardware or in hardware plus software functional units.
Those of ordinary skill in the art will appreciate that: all or part of the steps for implementing the above method embodiments may be implemented by hardware related to program instructions, and the foregoing program may be stored in a computer readable storage medium, where the program, when executed, performs steps including the above method embodiments; and the aforementioned storage medium includes: a mobile storage device, a Read Only Memory (ROM), a magnetic disk or an optical disk, or the like, which can store program codes.
Alternatively, the integrated units described above may be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the related art in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a mobile phone, a personal computer, a server, or a network device, etc.) to perform all or part of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a removable storage device, a ROM, a magnetic disk, or an optical disk.
The foregoing is merely an embodiment of the present application, but the protection scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered in the protection scope of the present application.

Claims (12)

1. A method of invoking an intelligent assistant, applied to a terminal, the method comprising:
displaying a navigation bar on a side of a screen of the terminal, wherein the functions of the navigation bar comprise a task management function and a text input function of an intelligent assistant; the task management function is used for the user to manage tasks running on the terminal, and the text input function is used for the user to input text to the terminal so as to interact with the intelligent assistant of the terminal;
receiving a first operation of a user on the navigation bar;
and in response to the first operation, invoking the text input function.
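For illustration only (the patent publishes no source code), the following is a minimal Kotlin/Android sketch of the behavior recited in claim 1: a side navigation bar that exposes a task management function and a text input function, and invokes the latter in response to a first operation. All identifiers here (EdgeNavigationBar, onTextInput, onTaskManagement) are hypothetical.

```kotlin
import android.content.Context
import android.view.MotionEvent
import android.view.View

// Hypothetical side bar exposing the two claimed functions.
class EdgeNavigationBar(context: Context) : View(context) {

    var onTaskManagement: () -> Unit = {}   // task management function
    var onTextInput: () -> Unit = {}        // text input function of the assistant

    override fun onTouchEvent(event: MotionEvent): Boolean {
        // For brevity, any release on the bar counts as the "first operation";
        // claims 2-4 refine it into a long press followed by a slide.
        if (event.actionMasked == MotionEvent.ACTION_UP) {
            performClick()
            onTextInput()   // invoke the text input function
        }
        return true
    }
}
```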
2. The method of claim 1, wherein the first operation comprises a long-press operation and a first sliding operation, and wherein invoking the text input function in response to the first operation comprises:
activating the text input function in response to the long-press operation;
and, with the text input function activated, displaying a text input interface of the intelligent assistant in response to the first sliding operation, wherein the text input interface is used for the user to input text to the terminal so as to interact with the intelligent assistant of the terminal.
3. The method of claim 2, wherein displaying the text input interface of the intelligent assistant in response to the first sliding operation comprises:
displaying the text input interface when the perpendicular distance between the sliding position of the first sliding operation and the side of the screen is greater than or equal to a first threshold.
4. The method according to claim 3, wherein, with the text input function activated, the method further comprises:
displaying part of the content of the text input interface when the perpendicular distance between the sliding position of the first sliding operation and the side of the screen is greater than or equal to a second threshold and less than the first threshold.
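As a non-authoritative sketch of the two-threshold gesture recited in claims 2 to 4: a long press activates the text input function, and the perpendicular distance of the subsequent slide from the screen side selects between showing part of the interface and showing it in full. The threshold values and all names below are assumptions, not values from the patent.

```kotlin
import android.view.MotionEvent
import kotlin.math.abs

class AssistantGestureHandler(
    private val screenEdgeX: Float,               // x coordinate of the hosting screen side
    private val showPartialInterface: () -> Unit, // claim 4: part of the content
    private val showFullInterface: () -> Unit     // claim 3: full text input interface
) {
    private var activated = false                 // claim 2: set by the long press

    fun onLongPress() {
        activated = true
    }

    fun onMove(event: MotionEvent) {
        if (!activated) return
        // Perpendicular distance between the sliding position and the screen side.
        val distance = abs(event.rawX - screenEdgeX)
        when {
            distance >= FIRST_THRESHOLD_PX -> showFullInterface()
            distance >= SECOND_THRESHOLD_PX -> showPartialInterface()
        }
    }

    companion object {
        // Assumed values; the patent leaves both thresholds unspecified.
        const val FIRST_THRESHOLD_PX = 240f
        const val SECOND_THRESHOLD_PX = 80f
    }
}
```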
5. The method according to any one of claims 2 to 4, further comprising:
receiving a second operation from the user;
closing the text input interface in response to the second operation, wherein the second operation comprises:
a touch operation whose touch point on the screen of the terminal is located outside the text input interface; or
a second sliding operation toward the side of the screen, wherein the starting position of the second sliding operation is located within the text input interface.
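Again for illustration only, claim 5's two dismissal paths (a touch outside the interface, or a slide from inside the interface back toward the screen side) might be detected as follows; the Rect bounds, edge coordinate, and class name are assumed inputs, not part of the patent.

```kotlin
import android.graphics.Rect
import android.view.MotionEvent
import kotlin.math.abs

class DismissDetector(
    private val interfaceBounds: Rect,      // on-screen bounds of the text input interface
    private val screenEdgeX: Float,         // x coordinate of the hosting screen side
    private val closeInterface: () -> Unit
) {
    private var downInside = false
    private var downX = 0f

    fun onTouch(event: MotionEvent) {
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN -> {
                downInside = interfaceBounds.contains(event.rawX.toInt(), event.rawY.toInt())
                downX = event.rawX
                if (!downInside) closeInterface()   // touch outside the interface
            }
            MotionEvent.ACTION_UP -> {
                // Slide that starts inside the interface and ends closer to the edge.
                val towardEdge = abs(event.rawX - screenEdgeX) < abs(downX - screenEdgeX)
                if (downInside && towardEdge) closeInterface()
            }
        }
    }
}
```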
6. The method according to any one of claims 2 to 4, further comprising:
executing a third operation based on the text received by the text input interface, wherein the third operation comprises:
displaying feedback information, wherein the feedback information is feedback on the content of the text; and/or executing an instruction indicated by the text.
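The "third operation" of claim 6 amounts to routing the submitted text through the assistant and then showing feedback, executing an instruction, or both. A small sketch, with AssistantResult and the query function as hypothetical placeholders:

```kotlin
// Feedback and/or instruction produced for one piece of submitted text.
data class AssistantResult(
    val feedback: String? = null,          // feedback on the content of the text
    val instruction: (() -> Unit)? = null  // device instruction indicated by the text
)

fun handleSubmittedText(
    text: String,
    query: (String) -> AssistantResult,    // the assistant's interpretation step
    showFeedback: (String) -> Unit
) {
    val result = query(text)
    result.feedback?.let(showFeedback)     // display feedback information, and/or
    result.instruction?.invoke()           // execute the indicated instruction
}
```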
7. The method according to any one of claims 2 to 4, further comprising:
changing, when the text input function is activated, the brightness of the navigation bar from a first brightness to a second brightness, wherein the second brightness is greater than the first brightness.
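Claim 7's visual cue could be as simple as animating the bar between two brightness levels on activation. In this sketch, brightness is approximated with view alpha, and the two levels are assumed values not given by the patent.

```kotlin
import android.view.View

// Brightness modeled as view alpha; 0.5f and 1.0f are assumed levels.
fun setBarBrightness(bar: View, textInputActivated: Boolean) {
    val firstBrightness = 0.5f    // resting level
    val secondBrightness = 1.0f   // activated level, greater than the first
    bar.animate()
        .alpha(if (textInputActivated) secondBrightness else firstBrightness)
        .setDuration(150L)
        .start()
}
```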
8. The method according to any one of claims 1 to 4, wherein displaying the navigation bar on the side of the screen of the terminal comprises:
displaying the navigation bar on the side of the screen of the terminal when the terminal is in any one of a lock-screen state, an always-on (screen-off) display state, and a desktop state.
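Claim 8 gates the bar's visibility on three terminal states. A sketch with an assumed state enum (not a platform API):

```kotlin
// Assumed state model; not a platform API.
enum class TerminalState { LOCK_SCREEN, ALWAYS_ON_DISPLAY, DESKTOP, OTHER }

fun shouldShowNavigationBar(state: TerminalState): Boolean =
    state == TerminalState.LOCK_SCREEN ||
        state == TerminalState.ALWAYS_ON_DISPLAY ||
        state == TerminalState.DESKTOP
```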
9. An apparatus for invoking an intelligent assistant, the apparatus comprising:
a first display unit, configured to display a navigation bar on a side of a screen of the apparatus, wherein the functions of the navigation bar comprise a task management function and a text input function of an intelligent assistant; the task management function is used for the user to manage tasks running on the apparatus, and the text input function is used for the user to input text to the apparatus so as to interact with the intelligent assistant of the apparatus;
a first receiving unit, configured to receive a first operation of a user on the navigation bar;
and an invoking unit, configured to invoke the text input function in response to the first operation.
10. A device that invokes an intelligent assistant, the device comprising:
a memory for storing computer-executable instructions;
a processor, coupled to the memory, for implementing the method of any one of claims 1 to 8 by executing the computer-executable instructions.
11. A chip, the chip comprising:
a processor for calling and running a computer program from a memory, causing a device on which the chip is mounted to perform the method of any one of claims 1 to 8.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by at least one processor, implements the method of any one of claims 1 to 8.
CN202311397744.7A 2023-10-25 2023-10-25 Method, device, equipment, chip and storage medium for calling intelligent assistant Pending CN117453090A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311397744.7A CN117453090A (en) 2023-10-25 2023-10-25 Method, device, equipment, chip and storage medium for calling intelligent assistant

Publications (1)

Publication Number Publication Date
CN117453090A 2024-01-26

Family

ID=89582885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311397744.7A Pending CN117453090A (en) 2023-10-25 2023-10-25 Method, device, equipment, chip and storage medium for calling intelligent assistant

Country Status (1)

Country Link
CN (1) CN117453090A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination