WO2019119325A1 - A control method and apparatus (一种控制方法及装置) - Google Patents

A control method and apparatus

Info

Publication number
WO2019119325A1
WO2019119325A1 · PCT/CN2017/117585 · CN2017117585W
Authority
WO
WIPO (PCT)
Prior art keywords
interface
application
button
display
user
Prior art date
Application number
PCT/CN2017/117585
Other languages
English (en)
French (fr)
Inventor
倪静
钱凯
杨之言
徐镜进
刘石
周煜啸
朱勇刚
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to PCT/CN2017/117585
Priority to CN201780089422.2A
Priority to US16/956,663 (US11416126B2)
Publication of WO2019119325A1
Priority to US17/862,816 (US20230004267A1)


Classifications

    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G06F3/04817: Interaction techniques based on GUIs using icons
    • G06F3/0482: Interaction with lists of selectable items, e.g. menus
    • G06F3/0483: Interaction with page-structured environments, e.g. book metaphor
    • G06F3/04842: Selection of displayed objects or displayed text elements
    • G06F3/04883: Interaction techniques using a touch-screen or digitiser for inputting data by handwriting, e.g. gesture or text
    • G06F3/167: Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G06N20/00: Machine learning
    • G06N5/04: Inference or reasoning models
    • G06F3/16: Sound input; Sound output

Definitions

  • the present application relates to the field of terminal technologies, and in particular, to a control method and apparatus.
  • To enable voice input on a mobile phone, the user typically finds the settings icon on the main interface, taps it to enter the settings interface, locates the control switch for the voice input function there, and then turns the function on.
  • the user can then invoke functions such as making a call by entering the specified voice password.
  • the user can use the same operation mode to turn off the voice input function.
  • This step-by-step search-and-click procedure degrades the user experience, and the operation is too cumbersome and hard to master for a user who is unfamiliar with it.
  • The embodiments of the invention provide a control method and device that can solve the problem that invoking artificial intelligence (AI) functions such as the voice input function is too cumbersome.
  • an embodiment of the present invention provides a control method, which is performed by an electronic device.
  • the method includes: displaying a first interface; receiving a first input of a user acting on a non-navigation button; and in response to the first input, displaying at least one of an AI function portal interface and a scene service task interface corresponding to the non-navigation button.
  • The first interface includes a navigation bar, and the navigation bar is provided with a navigation key and at least one non-navigation button. When triggered, the navigation key causes the electronic device to return to the previous interface, jump to the main interface, or call out the recent tasks list.
  • When triggered, the at least one non-navigation button causes the electronic device to display at least one of the AI function entry interface and the scene service task interface.
  • Because the non-navigation button is placed in the navigation bar, the user can act on it to trigger display of the AI function entry interface and/or the scene service task interface. Since the navigation bar is displayed globally across interfaces such as the main interface and application running interfaces, the non-navigation button is available in virtually any application scenario. This lowers the difficulty of calling up the AI function entry interface or the scene service task interface, and thereby solves the problem that AI functions such as the voice input function are too cumbersome to invoke.
  • In one design, the at least one non-navigation button is a single button. Then, displaying, in response to the first input, at least one of the AI function entry interface and the scene service task interface corresponding to the non-navigation button may be implemented as displaying both interfaces in response to the first input. The user can thus perform different operations on this single button to display the AI function entry interface and the scene service task interface separately, or perform one operation to display both at the same time.
  • In other words, a single non-navigation button allows the user to call the above two interfaces at the same time, or to call different interfaces at different times through different operations on that button. It should also be noted that a single non-navigation button saves space in the navigation bar while still allowing the AI function entry interface and/or the scene service task interface to be invoked.
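  • A minimal sketch of this single-button dispatch might look as follows; the gesture names and interface identifiers are illustrative assumptions, not terms from the application:

```python
# Illustrative sketch only: the gestures ("click", "long_press", "slide_up")
# and interface identifiers are hypothetical, not specified by the patent.
def dispatch_single_button(gesture: str) -> set:
    """Map a gesture on the single non-navigation button to the set of
    interfaces to display: the AI function entry interface, the scene
    service task interface, or both at once."""
    mapping = {
        "click": {"ai_function_entry"},
        "long_press": {"scene_service_task"},
        "slide_up": {"ai_function_entry", "scene_service_task"},
    }
    return mapping.get(gesture, set())
```

  • The point of the sketch is only that one button can carry several gesture-to-interface bindings, including one gesture that opens both interfaces simultaneously.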
  • In another design, the at least one non-navigation button comprises two buttons.
  • In this design, receiving the first input and displaying the corresponding interface may be implemented as: receiving a second input of the user acting on the first button and, in response, displaying the AI function entry interface corresponding to the first button; and receiving a third input of the user acting on the second button and, in response, displaying the scene service task interface corresponding to the second button.
  • the second input and the third input may be the same or different.
  • The second input and the third input include, but are not limited to, a click, a double click, a long press, a leftward slide, a rightward slide, a pressure press, and a floating (hover) operation.
  • The purpose of providing two non-navigation buttons is to trigger different user interfaces when the user acts on different buttons.
  • Displaying, in response to the second input, the AI function entry interface corresponding to the first button may be implemented as displaying the AI function entry interface over the first interface in response to the second input.
  • The AI function entry interface can be displayed in a floating manner, for example by popping up a floating window over the first interface. In this way, the layout of the first interface currently being displayed is not changed; the AI function entry interface is overlaid on it and presented to the user, making it convenient for the user to invoke the AI functions.
  • Based on the content displayed in the first interface, information can be selectively recommended to the user.
  • the above floating display mode is easier for the user to operate.
  • the user can dynamically adjust the size and position of the suspended AI function portal interface, and even adjust the transparency of the interface during the presentation process, which is not limited herein.
  • Displaying the scene service task interface corresponding to the second button in response to the third input may be implemented as switching the display from the first interface to the scene service task interface in response to the third input. That is, an interface-switching mode may be adopted: the first interface currently presented to the user is switched to the scene service task interface for the user to access.
  • the first interface is a first application interface.
  • In this case, displaying the AI function entry interface corresponding to the first button may be implemented as: in response to receiving the user's preset operation on the first button in the navigation bar of the first application interface, displaying first recommendation information on the first application interface, where the first recommendation information is determined by the AI according to one or more display objects displayed on the first application interface, and a display object is at least one of text, voice, or image information.
  • Displaying the first recommendation information on the first application interface is specifically at least one of the following: displaying the first recommendation information in an input box of the first application interface; displaying the first recommendation information in a floating manner; or modifying the first application interface and displaying the first recommendation information on the modified first application interface.
  • displaying the first recommendation information in the input box of the first application interface can effectively save the time for the user to edit the reply content when replying to the message.
  • For example, the mobile phone may extract one or more keywords from the currently displayed content through processing such as semantic analysis, combine the extracted keywords with content in an existing database, and selectively recommend information to the user.
  • The first recommendation information is at least one of a network address link, text, a picture, or an emoticon.
  • the mobile phone can push a variety of recommendation information to the user for the user to directly reply to the message.
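  • As a rough illustration of the keyword-to-recommendation flow described above (the application only names "semantic analysis" and an "existing database"; the word-matching approach, function names, and data below are hypothetical stand-ins):

```python
# Hypothetical sketch of keyword-based recommendation. Real semantic analysis
# would be far richer; vocabulary matching here is a deliberate simplification.
def extract_keywords(text, vocabulary):
    # Naive stand-in for semantic analysis: keep words found in a known vocabulary.
    words = (w.strip(".,!?") for w in text.lower().split())
    return [w for w in words if w in vocabulary]

def recommend(text, database, vocabulary):
    """Combine extracted keywords with database entries to build the first
    recommendation information (links, text, pictures, emoticons)."""
    results = []
    for kw in extract_keywords(text, vocabulary):
        results.extend(database.get(kw, []))
    return results
```

  • With a database mapping "movie" to a showtimes link and "dinner" to a canned reply, a displayed message containing both words would yield both entries as candidate recommendations the user can tap into the input box.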
  • When the first recommendation information is a network address link, the method further includes: in response to the user performing a preset operation on the network address link, displaying, on the first application interface, the content pointed to by the network address link.
  • the content pointed to by the network address link can be presented in the current interface.
  • In one case, the first application interface is a viewfinder (framing) interface.
  • the first recommendation information is information corresponding to one or more display objects displayed on the first application interface, and the display object is image information.
  • the user uses the mobile phone to shoot the surrounding environment.
  • The mobile phone can automatically recognize that the current shooting process presents a viewfinder interface, and the display objects presented in the viewfinder interface can be used by the mobile phone to determine the first recommendation information.
  • The mobile phone can recognize the display object through functions such as on-screen recognition, and, based on the recognition result, complete functions such as search and push related to that result.
  • the AI function portal interface further includes at least one of voice, image, and text search, and save function buttons.
  • Displaying the AI function entry interface corresponding to the non-navigation button may be implemented as: in response to the first input, performing semantic analysis on the content of the first interface, extracting one or more keywords, and displaying an AI function entry interface containing specific information, where the specific information is information corresponding to the extracted keywords.
  • The scene service task interface includes: at a first time, displaying a shortcut of a third application at a first preset location of the scene service task interface and, in response to receiving the user's preset operation on that shortcut, displaying the interface corresponding to the third application on the scene service task interface; and at a second time, displaying a shortcut of a fourth application at the first preset location and, in response to receiving the user's preset operation on that shortcut, displaying the interface corresponding to the fourth application on the scene service task interface.
  • The third application and the fourth application are determined by the electronic device according to the user's usage habits; the first time is different from the second time, and the third application is different from the fourth application.
  • The scene service task interface may change to a greater or lesser extent depending on the scene. The triggers for updating the scene service task interface include, but are not limited to, a change of time, a change of the device's location, a change of a reminder, and the like, which are not limited herein.
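  • The habit-driven choice of which shortcut to show at the first preset location could be sketched as follows, under the assumption that usage habits are keyed by hour of day (an illustrative choice; the application does not specify the keying):

```python
# Hypothetical sketch: pick the application shortcut for the first preset
# slot of the scene service task interface from recorded launch counts per
# hour of day. All names and the hour-based keying are illustrative.
from collections import defaultdict

class UsageHabits:
    def __init__(self):
        self._counts = defaultdict(int)  # (hour, app) -> launch count

    def record_launch(self, hour, app):
        self._counts[(hour, app)] += 1

    def top_app(self, hour):
        """Return the most-launched application at this hour, or None."""
        candidates = {app: n for (h, app), n in self._counts.items() if h == hour}
        return max(candidates, key=candidates.get) if candidates else None
```

  • A user who mostly opens a news app in the morning and a video app in the evening would thus see the news shortcut at the first time and the video shortcut at the second time, matching the behavior described above.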
  • Correspondingly, the trigger button of the scene service task interface (that is, the second button) is updated as well: at the first time, content corresponding to the third application is displayed on the second button, and at the second time, content corresponding to the fourth application is displayed on it. Because the second button changes with the scene, it more effectively prompts the user about the content currently presented by the scene service task interface.
  • In one design, the first interface is a main interface, and the first interface further includes a dock (Dock) area, where the Dock area is used to place application shortcuts.
  • the navigation bar and the Dock area belong to two functional areas that are located at different positions on the display interface.
  • the navigation bar has a global display function compared to the Dock area.
  • an embodiment of the present invention provides a control apparatus.
  • The device can implement the functions of the foregoing method embodiments; the functions can be implemented by hardware, or by hardware executing corresponding software.
  • the hardware or software includes one or more modules corresponding to the above functions.
  • an embodiment of the present invention provides a terminal.
  • The structure of the terminal includes a display screen, a memory, one or more processors, a plurality of applications, and one or more programs, where the one or more programs are stored in the memory; when the one or more processors execute the one or more programs, the terminal is caused to implement the method of any of the first aspect and its various possible designs.
  • An embodiment of the present invention provides a readable storage medium including instructions that, when run on a terminal, cause the terminal to perform the method of any of the above first aspects and its various possible designs.
  • an embodiment of the present invention provides a computer program product, the computer program product comprising software code for performing the method of any of the above first aspects and various possible designs thereof.
  • an embodiment of the present invention provides a graphical user interface for performing the method of any of the above first aspects and various possible designs thereof.
  • FIG. 1 is a schematic structural diagram of a first terminal according to an embodiment of the present disclosure
  • FIG. 2(a) is a schematic diagram of a first display interface according to an embodiment of the present invention.
  • FIG. 2(b) is a schematic diagram of a second display interface according to an embodiment of the present invention.
  • FIG. 3(a) is a schematic diagram of a first navigation bar according to an embodiment of the present invention.
  • FIG. 3(b) is a schematic diagram of a third display interface according to an embodiment of the present invention.
  • FIG. 4(a) is a schematic diagram of a second navigation bar according to an embodiment of the present invention.
  • FIG. 4(b) is a schematic diagram of a fourth display interface according to an embodiment of the present invention.
  • FIG. 5(a) is a schematic diagram of a third navigation bar according to an embodiment of the present invention.
  • FIG. 5(b) is a schematic diagram of a fifth display interface according to an embodiment of the present invention.
  • FIG. 6(a) is a schematic diagram of a sixth display interface according to an embodiment of the present invention.
  • FIG. 6(b) is a schematic diagram of a seventh display interface according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of an eighth display interface according to an embodiment of the present invention.
  • FIG. 8(a) is a schematic diagram of a fifth display interface according to an embodiment of the present invention.
  • FIG. 8(b) is a schematic diagram of a sixth display interface according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a seventh display interface according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of an eighth display interface according to an embodiment of the present disclosure.
  • FIG. 11(a) is a schematic diagram of a ninth display interface according to an embodiment of the present invention.
  • FIG. 11(b) is a schematic diagram of a tenth display interface according to an embodiment of the present invention.
  • FIG. 12 is a schematic structural diagram of a control apparatus according to an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of a second terminal according to an embodiment of the present invention.
  • 210 - a button for triggering display of an AI function entry interface and a scene service task interface
  • The embodiments of the present invention can be applied to a terminal (i.e., an electronic device), which may be a notebook computer, a smart phone, a virtual reality (VR) device, an augmented reality (AR) device, an in-vehicle device, a smart wearable device, or the like.
  • the terminal can be configured with at least a display screen, an input device, and a processor.
  • Taking the terminal 100 as an example, as shown in FIG. 1, the terminal 100 includes components such as a processor 101, a memory 102, a camera 103, an RF circuit 104, an audio circuit 105, a speaker 106, a microphone 107, an input device 108, other input devices 109, a display screen 110, a touch panel 111, a display panel 112, an output device 113, and a power source 114.
  • the display screen 110 is composed of at least a touch panel 111 as an input device and a display panel 112 as an output device.
  • The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than those illustrated, may combine some components, may split some components, or may have a different component arrangement, which is not limited herein.
  • the components of the terminal 100 will be specifically described below with reference to FIG. 1 :
  • The radio frequency (RF) circuit 104 can be used for receiving and transmitting signals during the transmission or reception of information or during a call. For example, if the terminal 100 is a mobile phone, the terminal 100 can receive downlink information sent by a base station through the RF circuit 104 and pass it to the processor 101 for processing, and can transmit uplink-related data to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • RF circuitry 104 can also communicate with the network and other devices via wireless communication.
  • The wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • the memory 102 can be used to store software programs and modules, and the processor 101 executes various functional applications and data processing of the terminal 100 by running software programs and modules stored in the memory 102.
  • The memory 102 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and applications required by at least one function (for example, a sound playing function or an image playing function), and the data storage area may store data created according to the use of the terminal 100 (such as audio data and video data).
  • The memory 102 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • Other input devices 109 can be used to receive input numeric or character information, as well as to generate key signal inputs related to user settings and function control of terminal 100.
  • Specifically, the other input devices 109 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons or switch buttons), a trackball, a mouse, a joystick, or a light mouse (a light mouse is a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen). The other input devices 109 may also include sensors built into the terminal 100, such as a gravity sensor or an acceleration sensor, and the terminal 100 may use parameters detected by the sensors as input data.
  • the display screen 110 can be used to display information input by the user or information provided to the user as well as various menus of the terminal 100, and can also accept user input.
  • the display panel 112 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel 111 is also called a touch screen or a touch sensitive screen.
  • The touch panel 111 can collect the user's contact or non-contact operations on or near it (for example, operations performed by the user on or near the touch panel 111 using a finger, a stylus, or any other suitable object or accessory, which may also include somatosensory operations; the operations include single-point control operations, multi-point control operations, and the like) and drive the corresponding connection device according to a preset program.
  • the touch panel 111 may further include two parts: a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position and posture, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into information that the processor 101 can process, and transmits that information to the processor 101; it can also receive and execute commands sent from the processor 101.
  • The touch panel 111 can be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types, or by any technology developed in the future.
  • The touch panel 111 can cover the display panel 112, and the user can, according to the content displayed by the display panel 112 (including but not limited to a soft keyboard, a virtual mouse, virtual buttons, icons, and the like), operate on or near the touch panel 111. After detecting an operation on or near it, the touch panel 111 transmits the operation to the processor 101 to determine the user input, and the processor 101 then provides the corresponding visual output on the display panel 112 according to the user input.
  • Although the touch panel 111 and the display panel 112 are shown in FIG. 1 as two independent components implementing the input and output functions of the terminal 100, in some embodiments the touch panel 111 may be integrated with the display panel 112 to implement the input and output functions of the terminal 100.
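The detection-device → controller → processor pipeline described above can be sketched as follows. This is an illustrative sketch only; the class names, event fields, and pressure threshold are hypothetical, not taken from the patent.

```python
class TouchController:
    """Converts raw signals from the touch detection device into
    events that the processor can handle."""
    def to_event(self, raw):
        # raw: (x, y, pressure) reported by the touch detection device
        x, y, pressure = raw
        kind = "press" if pressure > 0.5 else "hover"
        return {"x": x, "y": y, "type": kind}

class Processor:
    """Determines the user input and chooses the visual output
    to provide on the display panel."""
    def handle(self, event):
        if event["type"] == "press":
            return f"render feedback at ({event['x']}, {event['y']})"
        return "no visual change"

controller = TouchController()
processor = Processor()
event = controller.to_event((120, 480, 0.9))
print(processor.handle(event))  # render feedback at (120, 480)
```

The split between `TouchController` and `Processor` mirrors the patent's division of labor: the controller only converts signals, while the processor decides what the display should do.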
  • The audio circuit 105, the speaker 106, and the microphone 107 provide an audio interface between the user and the terminal 100.
  • The audio circuit 105 can transmit converted audio data to the speaker 106, which converts it into a sound signal for output; conversely, the microphone 107 converts a collected sound signal into an electrical signal, which the audio circuit 105 receives and converts into audio data. The audio data is then output to the RF circuit 104 for transmission to a device such as another terminal, or output to the memory 102 so that the processor 101 can perform further processing in conjunction with the content stored in the memory 102.
  • the camera 103 can acquire image frames in real time and transmit them to the processor 101 for processing, and store the processed results to the memory 102 and/or present the processed results to the user via the display panel 112.
  • The processor 101 is the control center of the terminal 100. It connects the various parts of the entire terminal 100 using various interfaces and lines, and, by running or executing the software programs and/or modules stored in the memory 102 and recalling the data stored in the memory 102, executes the various functions of the terminal 100 and processes data, thereby monitoring the terminal 100 as a whole.
  • The processor 101 may include one or more processing units; the processor 101 may further integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface (User Interface, UI), applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 101.
  • The terminal 100 may further include a power source 114 (for example, a battery) for supplying power to the various components. The power source 114 may be logically connected to the processor 101 through a power management system, so that functions such as charging, discharging, and power-consumption management are handled through the power management system.
  • the terminal 100 may further include a Bluetooth module, a sensor, and the like, and details are not described herein.
  • The technical solution provided by the embodiments of the present invention is described below by taking the case where the terminal 100 is a mobile phone as an example.
  • The display interface of the mobile phone includes a status bar 201, a system area 202, a Dock area 203, and a navigation bar 204; the paging mark 205 is located in the system area 202.
  • The button 207 and the button 208 can be regarded as one possible implementation form of a non-navigation button. The button 210 mentioned below, which triggers the display of the AI function entry interface and the scene service task interface, may also be considered one possible implementation of a non-navigation button.
  • The system area 202 is used to display the icons of the applications that the mobile phone has installed, as well as folders.
  • The Dock area 203 is used to display the icons of applications that the user wishes to have viewable on every page of the home screen.
  • The navigation bar 204 can normally be displayed in any display interface; that is, when the user accesses any interface, the navigation bar 204 can be seen in the interface being accessed, and triggering a button on the navigation bar 204 causes the mobile phone to execute the corresponding function.
  • The user can click the navigation key 206 to trigger the mobile phone to return from the current display interface to the previous interface, long-press the navigation key to trigger the mobile phone to present the home screen interface, or slide the navigation key to the left or right to trigger the mobile phone to present recently accessed applications, and so on. The navigation key 206 can also be implemented as three navigation buttons corresponding respectively to returning to the previous interface, returning to the home screen, and displaying recently accessed applications, or as two navigation buttons that achieve these three functions through different operations such as clicking, double-clicking, long-pressing, pressure pressing, or hovering. A recently accessed application refers to an application accessed within a preset time period up to the current time, or can be understood as any application that has run in the foreground or background since the mobile phone was last powered on (that is, since the current power-on) up to the current time. The three functions corresponding to the navigation key are prior art and are not described here.
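The multi-function behavior of the single navigation key 206 amounts to a gesture-to-action dispatch table, which can be sketched as below; the gesture and action names are illustrative assumptions, not terms from the patent.

```python
# Hypothetical mapping of gestures on navigation key 206 to its three functions.
NAV_ACTIONS = {
    "click": "return_to_previous_interface",
    "long_press": "show_home_screen",
    "slide_left": "show_recent_apps",
    "slide_right": "show_recent_apps",
}

def on_navigation_key(gesture):
    """Dispatch a gesture on the navigation key to its function;
    unrecognized gestures are ignored."""
    return NAV_ACTIONS.get(gesture, "ignore")

print(on_navigation_key("click"))       # return_to_previous_interface
print(on_navigation_key("long_press"))  # show_home_screen
```

The same table-driven shape covers the two- and three-button variants: only the keys of the table change, not the dispatch logic.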
  • When the button 207 and the button 208 are disposed in the navigation bar 204, they are similar to the navigation key 206 in having a global display property; that is, no matter which display interface the mobile phone currently shows, as long as the navigation bar 204 exists in that display interface, the button 207 and the button 208 are displayed together with the navigation key 206. In other words, the button 207 and the button 208 can be displayed as long as the navigation bar 204 can be displayed in the display interface.
  • The embodiment of the present invention is described by taking as an example a navigation bar 204 that includes a single navigation key, namely the multi-function navigation key 206. The technical solution of the embodiment of the present invention can also be adapted to navigation bars set in other manners, for example, navigation bars that include three or two buttons.
  • With a single navigation key, the space of the navigation bar 204 can be effectively saved, so that the navigation bar 204 has sufficient space to place other buttons; for example, in the free area of the navigation bar 204, a button 207 for triggering the display of the AI function entry interface and a button 208 for triggering the display of the scene service task interface may be provided.
  • The buttons 207 and 208 are located on the two sides of the navigation key 206 so as to make full use of the free area of the navigation bar 204; for example, the button 207 is located on the left side of the navigation key 206 and the button 208 on the right side. The positions of the buttons 207 and 208 are not limited, however: when the two buttons are disposed on the two sides of the navigation key 206, the arrangement shown in FIG. 2(b) can also be used, that is, the button 207 on the right side of the navigation key 206 and the button 208 on the left side.
  • After the user activates the button 207 in the navigation bar, for example by clicking, double-clicking, sliding (that is, sliding to the left, right, up, or down), pressure pressing, long-pressing, a large-area gesture, or a hovering touch, the mobile phone can present to the user the AI function entry interface, which includes one or more AI function entries. Similarly, after the user activates the button 208, the mobile phone can present the scene service task interface, which includes one or more scene service tasks for the user to access. That is, in the embodiment of the present invention, the second input of the user acting on the button 207 and the third input of the user acting on the button 208 may be the same or different.
  • Among the AI functions, users may be more inclined to use one particular function, such as one of the sweep (code-scanning) function, the search function, and the voice input function; these are currently AI functions with high usage frequency or high practicability. The single AI function is not limited to one of the exemplary functions just listed and may be any AI function that users consider common. It may be a factory default setting, may be set or changed by the user, or may be determined by analyzing the user's habits, and is not limited herein.
  • For example, the button 207 can provide the user with one-click access to a single AI function, as shown in FIG. 3(a) and FIG. 3(b): the button 207 is displayed as the icon of the sweep function, so the user can more intuitively understand which AI function will be triggered after the button 207 is activated.
  • After the button 207 is activated, the mobile phone can pop up the floating window 209 to present the operation interface of the sweep function to the user, through which the user can directly scan and identify codes (two-dimensional codes, barcodes, and the like) with the mobile phone. In other words, the user invokes the sweep function with a one-step operation on the button 207, which facilitates user operation.
  • Displaying the button 207 in this way enables the user to understand more intuitively which AI function the button 207 can trigger, so the user can conveniently and quickly call it up according to his or her own needs.
  • Which AI function the button 207 triggers can be preset by the user, or can be set before the mobile phone leaves the factory; the specific setting method is mentioned later and is not described here.
  • The button 208 is used to trigger the scene service task interface, and the scene service tasks may change as the scene changes. Therefore, in the embodiment of the present invention, it is possible that no matching scene service task exists in the current scene; in that case, clicking the button 208 would likely bring up a blank interface or no interface at all.
  • The matching rules between the current scene and the scene service tasks include, but are not limited to, the following: the user selects one or more of multiple scenarios provided by the mobile phone; or, when one or more preset parameters of the mobile phone meet a preset scenario (condition), the mobile phone automatically displays the corresponding scene icon to prompt the user with the scene service tasks available in the current scene, or provides the user with information related to the current scene.
  • To avoid this, the navigation bar 204 may simply not present the button 208 to the user. At this point, the button 208 may be considered hidden, or the button 208 may not exist in the navigation bar 204 at all; thus, for the user, no invalid interface can be called up by acting on the button 208. When the button 208 is hidden, it can be understood that the user can still call the button 208 up in the navigation bar 204, or that the button 208 is automatically displayed in the navigation bar 204 once it has a corresponding non-blank interface, which is not limited here.
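The hide-when-empty behavior of the button 208 reduces to a visibility check on the set of matched scene service tasks; a minimal sketch (the function and task names are hypothetical):

```python
def button_208_visible(matched_tasks):
    """Show button 208 only when at least one scene service task matches
    the current scene, so the user can never call up a blank interface."""
    return len(matched_tasks) > 0

print(button_208_visible(["flight_reminder"]))  # True
print(button_208_visible([]))                   # False
```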
  • The button 208 thus selectively presents the content of scene service tasks to the user.
  • Scene service tasks include, but are not limited to, one or more of flights, trains, hotels, friends at the destination, destination recommendations, break reminders, meetings, express delivery, sports and health, data traffic reports, and mobile phone usage.
  • For example, the button 208, when presented, is displayed as an icon that includes an airplane graphic. The user can thus intuitively understand that there is currently a flight- or travel-related task, or that the phone is currently in a flight-related scenario, and that after acting on the button 208 the scene service task related to the ticket can be obtained. The ticket-related scene service task may present at least one of the departure time, arrival time, departure place, destination, airport information, flight duration, mileage, and transportation to the airport corresponding to the purchased ticket; it may also, in conjunction with the user's current location, selectively push a suitable travel route, travel mode, and the like to the user, and may further be associated with existing applications in the mobile phone to provide the user with convenient services such as hailing a car, booking a hotel, and destination contact information.
  • As another example, the button 208 is displayed as an icon including a weather graphic when presented. The user can intuitively understand that after acting on the button 208, the scene service task related to the weather can be obtained. The weather-related scene service task may present at least one of the current temperature, the proportion of inhalable particles, and the possible temperature change over the next period of time; it may also, in combination with parameters such as the user's somatosensory temperature collected by related devices such as a wristband, selectively recommend to the user the type of clothing suitable for the current weather, and may of course also be associated with existing applications in the mobile phone to provide the user, with the user's permission, convenient services such as turning on an air purifier or the air conditioning in the room.
  • As another example, the button 208 is displayed as an icon including a cutlery graphic when presented. The user can intuitively understand that after acting on the button 208, the scene service task related to dining can be obtained. The dining-related scene service task may present at least one of information such as nearby places providing catering, their consumption levels, and recommended dishes; it may also selectively push a suitable route to the venue according to the user's current location, may be associated with existing applications in the mobile phone to push services such as group purchases and discounts to the user with the user's permission, and may further, in combination with the user's current movement information, provide the user with information such as parking lots and gas stations near the restaurant.
  • As another example, the button 208 is displayed as an icon including an alarm clock graphic when presented. The user can intuitively understand that after acting on the button 208, the scene service tasks related to schedules and reminder items can be obtained. The schedule- and reminder-related scene service task may present to the user schedules and reminder items whose time has not yet arrived; similarly, it may also selectively push suitable content to the user according to the user's current location.
  • As another example, the button 208 is displayed as an icon including a gift graphic when presented. The user can intuitively understand that after acting on the button 208, the scene service task related to shopping can be obtained. The shopping-related scene service task may present at least one of items such as shopping websites, links to currently popular products, products that are trending and that the user is likely to need to purchase, and preferential conditions for products the user has added to a wishlist.
  • That is, the icon of the button 208 can change as the scene changes, the purpose being to let the user know more intuitively what kind of scene service tasks can be obtained through the button 208 when it is available.
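The scene-dependent icon of the button 208 can be modeled as a lookup from the current scene type to an icon, following the examples above (airplane, weather, cutlery, alarm clock, gift). The table keys and the fallback value below are assumptions for illustration only.

```python
# Hypothetical scene-to-icon table; the patent gives these only as examples.
SCENE_ICONS = {
    "flight": "airplane",
    "weather": "weather",
    "dining": "cutlery",
    "schedule": "alarm_clock",
    "shopping": "gift",
}

def icon_for_scene(scene):
    """Pick the icon that button 208 shows for the current scene."""
    return SCENE_ICONS.get(scene, "default")

print(icon_for_scene("flight"))  # airplane
print(icon_for_scene("dining"))  # cutlery
```

A user-preset icon scheme, as the next paragraph describes, would simply replace the entries of this table.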
  • The icon of the button 208 can be preset by the user; for example, the mobile phone provides a plurality of icon options, and the user sets corresponding icons for the different kinds of scene service tasks in advance, so that the user understands, upon seeing an icon, what the mobile phone is currently recommending.
  • The icon may also be one of the icons used in platforms such as application stores to classify applications of various functions for users to download and update, or an icon recognized by most users as distinguishing different scene service tasks, so that most users can intuitively understand the content of the scene service task that the mobile phone is trying to recommend.
  • Similarly, the presentation form of the button 207 may also change according to the current scene, and is not limited to presenting an icon representing a single AI function only when a single AI function is triggered; that is, even if multiple AI function entries are obtained after the user acts on the button 207, the presentation form of the button 207 can still vary. In the embodiment of the present invention, the presentation forms of the button 207 and the button 208, the timing of their changes, the conditions for triggering changes, and the like are not limited.
  • The foregoing scene includes, but is not limited to, the content displayed by the current display interface of the mobile phone, and may further include the application to which the current interface belongs, the user's current location, the current time, the user's current state, and the like, which are not limited herein.
  • The content displayed by the current display interface of the mobile phone can be identified by means such as screen recognition; the application to which the current interface belongs can be obtained from the application's attribute information or through a network query; the user's current location can be identified in combination with the positioning function of the mobile phone; the current time can be obtained from the real-time clock presented by the phone; and the user's current state can be obtained in conjunction with an application in the mobile phone that monitors the user's health, or determined from parameters detected by a wearable device such as a wristband, and so on, which are not limited herein.
  • When more than one type of scene service task matches the current scene, the icon of the button 208 may present to the user the icon corresponding to the scene service task with the highest priority, based on the priorities corresponding to the types of scene service tasks. The priorities of the different types of scene service tasks may be preset by the user, or may be set according to the usage habits of most users when the mobile phone leaves the factory; the user may also selectively be provided with a function for modifying the priority levels, so as to provide a more personalized service.
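Choosing which icon to show when several task types match can be sketched as a priority-based selection. The numeric priority scheme below (lower value = higher priority) is an assumption for illustration; the patent only requires that priorities exist and be adjustable.

```python
def highest_priority_task(matched, priority):
    """Return the matched scene-service-task type with the highest
    priority; button 208 would then show that task's icon."""
    return min(matched, key=lambda task: priority.get(task, float("inf")))

# Example priority table: flight outranks schedule, which outranks weather.
priority = {"flight": 0, "schedule": 1, "weather": 2}
print(highest_priority_task(["weather", "flight"], priority))  # flight
```

Letting the user edit the `priority` dictionary corresponds to the priority-modification function mentioned above.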
  • Optionally, the mobile phone can also present the icons corresponding to at least two types of scene service tasks to the user at the same time. Presentation forms include, but are not limited to, displaying at least two icons in an overlapping manner, displaying at least two icons alternately, and the like, which are not limited herein.
  • When two icons are displayed overlapping, they can be set to contrasting colors, or displayed with a certain transparency, which is not limited herein. The two icons can also partially overlap, that is, the rear half of one icon overlaps the front half of the other; for example, one icon on the first layer is completely displayed, while the other icon, located on the second layer below the first, shows only the portion not covered by the first icon.
  • When two icons are displayed alternately, the duration for which each icon is displayed before alternating can be set in advance; different icons can be given different single-display durations, or the durations of the two icons can be set to be the same. For example, the first icon is displayed for a period of time, then the second icon is displayed for another period of time, then the first icon again, and so on, achieving an alternating display. The display duration of each icon may also be set according to the priority of the scene service task corresponding to the icon, and the priority may be preset by the user according to historical experience or the user's subjective preference, which is not limited herein.
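The alternating display with per-icon durations can be sketched as a schedule generator; the function below cycles through the icons until a total display time is exhausted. The names and the integer time unit are illustrative assumptions.

```python
import itertools

def alternating_schedule(icons, durations, total_time):
    """Build (icon, start, end) intervals, cycling through the icons with
    their individual display durations until total_time is reached."""
    schedule, t = [], 0
    for icon, dur in itertools.cycle(zip(icons, durations)):
        if t >= total_time:
            break
        schedule.append((icon, t, min(t + dur, total_time)))
        t += dur
    return schedule

# First icon shown for 3 time units, second for 2, alternating over 10 units.
print(alternating_schedule(["flight", "weather"], [3, 2], 10))
# [('flight', 0, 3), ('weather', 3, 5), ('flight', 5, 8), ('weather', 8, 10)]
```

Setting both durations equal reproduces the "same duration" variant; weighting the durations by task priority reproduces the priority-based variant.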
  • The button 207 and the button 208 located in the navigation bar 204 each correspond to one function; that is, if the user wants to call up the function corresponding to each of the button 207 and the button 208, the user needs to act on the button 207 and on the button 208 separately. To save the space occupied in the navigation bar 204, the button 207 for triggering the display of the AI function entry interface and the button 208 for triggering the display of the scene service task interface may be integrated; that is, a single button 210 for triggering the display of the AI function entry interface and the scene service task interface is set, as shown in FIG. 6(a) or FIG. 6(b).
  • Alternatively, only one of the button 207 and the button 208 may be set on the navigation bar.
  • The setting of the button 210 is similar to that of the button 207 and the button 208; reference may be made to their descriptions, which are not repeated herein. Similarly, the input acting on the button 210 is similar to the second input and the third input and is not described here. It should be noted that after the user acts on the button 210, both the AI function entry interface and the scene service task interface can be called up. For the same reason, the button 210 may also be presented in the form of a variable icon; for the specific implementation, refer to the foregoing description, and details are not described herein.
  • Optionally, each of the above buttons can be presented in the form of a small icon; that is, the icons of the above buttons are smaller than the application icons and folders presented in the system area 202, and of course smaller than the icons of the application shortcuts in the Dock area. In other words, using a small-icon design for the above buttons can effectively save space in the display interface. Of course, the buttons in the navigation bar can also be displayed in the normal icon size, and the sizes of the icons need not be consistent, which is not limited by the present invention.
  • Since the navigation bar 204 itself exists in the display interface, and each of the above buttons is disposed in the navigation bar 204, the buttons do not occupy any display space outside the navigation bar 204. The display interface can therefore be utilized more fully, providing the user with a more convenient operation mode without occupying extra display space.
  • The user can act on the button 207 or the button 208 by clicking, double-clicking, long-pressing, and so on, thereby triggering the mobile phone to display the interface corresponding to that button.
  • For the AI function entry interface, to further facilitate the user, operations such as sliding left, sliding right, and sliding up in the area where the button 207 is located can also be defined to trigger different AI functions respectively.
  • For example, in FIG. 2(a), after the user slides to the left in the area where the button 207 is located, a floating window as shown in FIG. 3(b) pops up, that is, the user calls up the sweep function with one action; after the user slides to the right in the area where the button 207 is located, a floating window as shown in FIG. 4(b) pops up, that is, the user calls up the search function with one action; after the user slides up in the area where the button 207 is located, a floating window as shown in FIG. 5(b) pops up, enabling the user to call up the voice input function with one action.
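The per-gesture shortcuts on the button 207 form another small dispatch table; a sketch (the gesture names are assumptions for illustration):

```python
# Hypothetical mapping of gestures in the area of button 207 to AI functions.
BUTTON_207_GESTURES = {
    "slide_left": "sweep",        # FIG. 3(b): code-scanning floating window
    "slide_right": "search",      # FIG. 4(b): search floating window
    "slide_up": "voice_input",    # FIG. 5(b): voice input floating window
}

def on_button_207(gesture):
    """Map a gesture on button 207 to the AI function whose floating
    window should pop up; unknown gestures are ignored."""
    return BUTTON_207_GESTURES.get(gesture, "ignore")

print(on_button_207("slide_left"))  # sweep
print(on_button_207("slide_up"))    # voice_input
```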
  • The user can act on the button 210 by clicking, double-clicking, long-pressing, pressing, and so on, thereby triggering the mobile phone to display the interfaces corresponding to the button, namely the AI function entry interface and the scene service task interface.
  • For example, after the user slides to the left in the area where the button 210 is located, the floating window pops up, that is, the user can call up the AI function entry interface with one action; after the user slides to the right in the area where the button 210 is located, the current display interface is switched to the interface corresponding to the scene service tasks, that is, the user can call up the scene service task interface with one action.
  • That is, the user can choose to call up one function or both functions, and can selectively call up different functions with different operations. It should be noted that the above operation modes are possible examples and are not intended to limit the embodiments of the present invention.
  • Optionally, buttons other than the navigation key 206 may not be provided in the navigation bar 204 at all; instead, after the user acts on the navigation bar 204, for example by sliding upwards, the interface switches to display the AI function entries and scene service tasks.
  • For example, an upward or downward sliding operation with the paging mark 205 as the starting point can be regarded as the trigger for calling up the AI function entries and/or the scene service tasks. For instance, if the user performs an upward sliding operation starting from the paging mark 205, the floating window pops up to display the AI function entries; if the user performs a downward sliding operation, the current display interface is switched to the interface corresponding to the scene service tasks, that is, the scene service tasks are called up; if the user performs a long-press operation, the current display interface is switched to the interface corresponding to the scene service tasks and a floating window is displayed at the top of that interface to show the AI function entry interface, or, after the interface switch, the AI function entry interface and the scene service task interface are displayed simultaneously in the currently displayed interface.
  • The following describes the displayed AI function entries and/or scene service tasks in combination with specific application scenarios.
  • The description takes as an example the case where buttons other than the navigation key 206, namely the button 207 and the button 208, are provided in the navigation bar 204; the corresponding functions in other arrangements are the same and are not described again.
  • Suppose the content currently displayed by the mobile phone is news. After the user acts on the button 207, the mobile phone presents the user with the interface corresponding to the button 207: a floating window 209 pops up over the current news interface, partially covering it, and presents the AI function entries to the user.
  • The AI functions include at least one of a voice input function, a sweep function, a search function, a screen recognition function, shortcuts of application functions, and applets.
  • the floating window 209 shown in FIG. 8(a) includes an area where the large card 211 is located, an area where the small card 212 is located, and an area 213 where the fixed AI function entry is located.
  • One or more large cards 211 can present the recognition result obtained by applying the screen recognition function to the current display interface, together with content associated with the recognition result; one or more small cards 212 can present to the user shortcuts of applications associated with the recognition result, shortcuts of application functions, or applets, which are not limited herein.
  • The area 213 may include a shortcut button 214 for the sweep function, a shortcut button 215 for the search function, and a shortcut button 216 for the voice input function. The user can trigger recognition of a two-dimensional code, a barcode, or the like by clicking the button 214; trigger a search for text, pictures, and so on by clicking the button 215; and open the entry for voice input by clicking the button 216.
  • The above-mentioned screen recognition function recognizes the content of the current display interface by means of screen recognition. For example, keywords and key phrases existing in the current display interface can be identified through semantic extraction, after which applications, application functions, or links corresponding to those keywords are found through, for example, tag matching, and cards combining the above content are generated and presented to the user. Tag matching refers to correspondences between keywords or key phrases and existing applications, or correspondences with existing application functions, or links and the like, corresponding to related content, found by searching the web for the keywords and key phrases, which are not limited herein.
  • The foregoing correspondences may be determined by the user in advance, or may be determined in combination with matching relationships stored in a database or a central control station, and are not limited herein.
  • Alternatively, the obtained information can be sent to each application, and each application can determine whether the matching relationship with the keywords, key phrases, and so on is satisfied; if it is, the application can automatically push itself to the mobile phone, and the mobile phone generates a shortcut of the application and presents it. Application functions, applets, and the like can adopt the same implementation, which is not limited herein.
  • Optionally, a network search can also be performed automatically, with the search results presented in the form of a keyword or title plus a link.
  • For example, suppose the current news is about an unmanned-driving technology. The mobile phone can automatically display other searched news about unmanned driving, related technical documents, pictures, reports, and so on, in the form of cards or links on the interface for the user to call up.
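The semantic-extraction plus tag-matching flow described above can be sketched as follows. The keyword vocabulary, the tag table, and the card names are all hypothetical; a real implementation would draw them from the database or central control station mentioned earlier.

```python
# Hypothetical tag-matching table: keywords -> associated application shortcuts.
TAG_TABLE = {
    "unmanned driving": ["video_player", "notepad", "chat"],
    "flight": ["travel_app"],
}

def extract_keywords(screen_text, vocabulary):
    """Crude stand-in for semantic extraction: keep every vocabulary
    term that appears in the recognized screen text."""
    return [kw for kw in vocabulary if kw in screen_text]

def build_cards(screen_text):
    """Generate small-card shortcuts for applications matched to the
    recognized keywords, as the tag-matching step describes."""
    cards = []
    for kw in extract_keywords(screen_text, TAG_TABLE):
        cards.extend(TAG_TABLE[kw])
    return cards

news = "New sensors push unmanned driving technology forward"
print(build_cards(news))  # ['video_player', 'notepad', 'chat']
```

Real semantic extraction would of course use NLP rather than substring matching; the sketch only shows the keyword → tag table → card pipeline.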
  • The large card 211 can present to the user a link to the news shown by the current display interface, and the user can save the link to a preset location by clicking the large card 211. The preset location may be part or all of the current floating interface, or may be the negative screen (HIBOARD) of the mobile phone or another location; that is, the card is saved to the negative screen, and the user can then open the content corresponding to the link in the interface corresponding to the negative screen to continue browsing. The negative screen can be regarded as a multi-functional collection interface convenient for the user to operate, allowing the user to obtain the corresponding services and content without opening an application.
  • The large card 211 may also selectively present to the user information such as the title and abstract of the content described in the large card 211, which is not limited herein.
  • the user can also open the application, application function, applet or link corresponding to the big card 211 directly by clicking, double clicking, long pressing or other operations to view related content.
  • The small cards 212 can present to the user shortcuts of applications or application functions, or applets, associated with the current screen recognition result.
  • the user can click on the player small card to open the player shortcut to search for the video related to the current news; or click the chat shortcut to share the current news content, Explore, etc.; or click on the notepad shortcut to record important content in the current news.
  • shortcuts of applications such as the player, chat, and notepad are highly correlated with the news, so such small cards are generated and pushed to the user.
  • the degree of correlation may be preset by the user, for example, by setting a certain keyword or a certain type of keyword and the shortcut of the application corresponding to the keyword, which is not limited herein.
  • when a shortcut or applet in a small card is opened, its content can be displayed on the current interface without jumping to another interface, or the display can jump to the interface corresponding to the opened application.
  • the user can set or modify the opening mode.
  • the user can slide left and right or up and down in the floating window 209 to selectively present some or all of its contents. For example, the user slides in the direction shown in FIG. 8(a) to obtain the content shown in FIG. 8(b).
  • the buttons for triggering the above AI functions may be fixedly displayed in the floating window; that is, once determined, the positions of the button 214, the button 215, and the button 216 do not change as the scene changes. In other words, when the user slides in the floating window 209, the positions of these three buttons do not change.
  • the sliding operation can cause the large card 211 and the small card 212 to slide simultaneously.
  • alternatively, the areas where the large card 211 and the small card 212 are located may divide the content presented by the floating window 209 into a plurality of display windows, and the user can operate each display window separately.
  • for example, a sliding operation by the user in the area of the large card 211 controls the left-right sliding of the large card 211, while a sliding operation in the area of the small card 212 controls the left-right sliding of the small card 212.
  • the above example is one possible implementation manner and is not intended to limit the embodiments of the present invention.
  • in this way, the sweep function, the search function, and the voice input function can be called up after the user acts on the button 207, without the user performing multiple operations to find the position of each button.
  • that is, the trigger buttons of the above AI functions can be conveniently called up so that the user can use these functions.
  • in addition, the floating window 209 can present the user with other content pushed based on the scene.
  • the content of the AI function entry interface may be related to the application, to the content presented by the current display interface, or to both, which is not limited herein.
  • for a chat interface, the mobile phone can push appropriate content based on the current information-interaction interface, that is, the content presented by the chat interface, such as text, picture information, and voice.
  • the mobile phone can recognize a place recorded in the text, or the place corresponding to the scene presented in the picture information, through the screen-recognition function, or extract related information such as a place name from the voice through voice recognition.
  • the mobile phone then searches for the place in real time, thereby determining content corresponding to it, such as its location, the modes of transportation to it, its consumption level, and the like. Based on the determined content, applications, application functions, and the like that match it are selected, so that a shortcut of an application with a group-purchase function, a shortcut of an application with a ride-hailing function, and the like are pushed.
  • in the above example, the mobile phone completes screen recognition directly based on the content presented by the current display interface, so as to implement content search and push.
  • the mobile phone can also complete the above operations in combination with the application and the content presented by the current interface, and details are not described herein.
  • in response to receiving a preset operation on the first button on the navigation bar of the first application interface, the first recommendation information may also be displayed in an input box of the first application interface.
  • the first recommendation information is determined by the AI according to one or more display objects on the first application interface.
  • a display object is at least one of text, voice, or image information.
  • for example, the mobile phone can perform processing such as semantic analysis on the content presented in the chat interface, for example the context of the conversation, and thereby recommend to the user first recommendation information that the user may wish to use.
  • the first recommendation information may be content that the user wants to input into the input box to reply to the user at the opposite end of the chat. In this way, the user can directly select the desired content from the first recommendation information, saving the operations of typing text, recording voice, and the like.
  • presenting the first recommendation information for the user to select is effective for recommending chat content, reply content, and the like, thereby solving the problem of cumbersome input.
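As a rough illustration of how such reply recommendations could be produced, the sketch below matches trigger words extracted from the conversation context against a reply table. The trigger words, canned replies, and substring matching are all invented for illustration; the patent itself only specifies semantic analysis and keyword extraction in general terms.

```python
# Illustrative sketch of generating first recommendation information for a
# chat interface: words from the conversation are matched against an
# assumed reply table, and matching replies become the candidates offered
# to the user in the input box.

def recommend_replies(conversation, reply_table):
    """Suggest replies whose trigger word appears in the conversation."""
    text = " ".join(conversation).lower()
    return [reply for trigger, reply in reply_table if trigger in text]

# Invented reply database.
reply_table = [
    ("lunch", "Sure, where shall we eat?"),
    ("meeting", "I will join the meeting on time."),
    ("ticket", "I have already booked the ticket."),
]
suggestions = recommend_replies(
    ["Are you free for lunch?", "There is a meeting at 2 pm."], reply_table)
```

A real implementation would use semantic analysis rather than substring matching, but the overall flow, from conversation context to a ranked list of candidate replies, is the same.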
  • the first recommendation information may also be displayed on the first application interface, for example, as shown in FIG. 8(a) and FIG. 8(b), by modifying the first application interface and displaying the first recommendation information on the modified interface.
  • for example, the first application interface is scaled, the scaled content is displayed in the upper part of the current display interface, and the first recommendation information is displayed below it.
  • alternatively, a part of the first application interface is displayed in the upper part of the current display interface, and the first recommendation information is displayed below it.
  • modifying the first application interface includes, but is not limited to, selecting a part of its content or adjusting a part of its content.
  • for a video-playing interface, the first recommendation information and the video being played may be displayed in a split screen. That is, the video window currently being played is zoomed so that it occupies most of the current display interface, and the small remaining portion is used to display the first recommendation information.
  • the mobile phone can determine information such as the type and name of the current video through the screen-recognition function, and then, based on that information, push the related content of the video, that is, the first recommendation information.
  • for example, the mobile phone can push to the user a shortcut of an application with a ticket-purchasing function, together with the theater information of the released movie, such as the location of the movie theater, the fare, and the playing times of the movie.
  • the user can directly click the shortcut to choose a theater and complete the ticket purchase.
  • the above pushing manner can also help the user learn information about the movie corresponding to the currently viewed video, such as movie reviews.
  • for a viewfinder interface, the first recommendation information may be displayed on the current application interface.
  • the content in the preview screen includes, but is not limited to, at least one of text, scenery, food, and people.
  • for example, if the preview shows the Great Wall, the mobile phone can push history related to the Great Wall to the user, such as its origin, the time of its construction, and the like.
  • since the Great Wall is a historical site, relevant information about other famous attractions, such as the Ming Tombs, can also be pushed for the user to view.
  • if the mobile phone recognizes that it is currently in the preview stage, it can be considered that the user has a tendency to shoot but has not completed shooting.
  • in this case, the user can be provided with photography-skill prompts.
  • the content of the prompts includes, but is not limited to, at least one of the framing position, the posture of the photographed person, the depth of field when framing, the moment to press the shutter, and the filter mode to select.
  • the filter mode includes, but is not limited to, one of a portrait mode, a macro mode, and a motion mode.
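The filter-mode suggestion above could be realized as a simple rule table over the objects recognized in the viewfinder. The object labels and the rules below are assumptions made for illustration; the patent does not specify how a mode is chosen.

```python
# Minimal, assumed rule table mapping recognized framing content to a
# suggested filter mode, as one possible realization of the photography
# skill prompt. Object labels are invented.

def suggest_filter_mode(scene_objects):
    """Pick a filter mode from simple, illustrative rules."""
    if "person" in scene_objects:
        return "portrait mode"
    if "flower" in scene_objects or "insect" in scene_objects:
        return "macro mode"
    if "runner" in scene_objects or "vehicle" in scene_objects:
        return "motion mode"
    return "auto"

mode = suggest_filter_mode(["person", "building"])
```

The rules are checked in priority order, so a frame containing both a person and a moving vehicle would still be offered portrait mode first.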
  • the information displayed above may be obtained from content saved/collected by the user, or from the network.
  • information related to an object in the current framing interface can preferentially be obtained from the content saved/collected by the user and displayed.
  • if no related content is found there, the related information is obtained by searching the network. It can also be obtained both from the content saved/collected/browsed by the user and from the network, with everything presented on the display interface, or with user-specified information presented according to the user's choice.
  • the content saved/collected/browsed by the user may be stored in the current terminal, that is, locally in the mobile phone operated by the user, in another terminal under the same account, or in the cloud, which is not limited herein.
  • the content browsed by the user may be stored in each terminal or in each server, and may be the user's browsing history and the like.
  • the server belongs to the electronic devices mentioned herein.
  • for a game interface, relevant content can be pushed for the user based on the currently presented game interface.
  • for example, the current interface displays a game character.
  • the mobile phone can recognize information about the character through screen recognition, and then search the network side for the character's operation mode, the equipment other players configure for the character, suitable lineups, and other content, and push it to the user, possibly in the form of a large card or a small card.
  • gameplay videos of expert players can also be pushed for the user to view.
  • for a music-playing interface, the mobile phone may push the creative background of the music, playlists similar in style to the music, other works by the singer, and the like.
  • for a picture-browsing interface, the mobile phone can push several applications that process pictures well, or identify the shooting location of the picture to provide the user with related information, and so on.
  • for a navigation interface, shortcuts of other navigation software installed in the mobile phone may be pushed.
  • if the first recommendation information is a network address link, that is, the content presented to the user after the AI function entry interface is called up includes a network address link, then in response to the user's preset operation on the link, the mobile phone displays, on the current display interface, that is, on the first application interface, the content to which the address link points.
  • in short, the AI function entry can take many variations, thereby providing users with better service.
  • buttons other than the navigation key 206, namely the button 207 and the button 208, are provided in the navigation bar 204. If the user acts on the button 208, the mobile phone presents the user with the interface corresponding to the button 208, as shown in the figure.
  • the location information 217 is the current location of the user determined by the mobile phone based on the current scene, for example through a positioning function; the location information may be, for example, "near the office area".
  • the contents presented to the user include, but are not limited to, office-related content and content that the user normally accesses when near the office area around 12:50, for example a check-in card, a news card, and a meeting-schedule card.
  • for this user, during the lunch break around 12:50, the user usually reads news and views the meeting schedule, and when near the office area, the user usually checks in. Therefore, in the embodiment of the present invention, content or applications/applets are pushed to the user based on the user's location and time and on the user's daily behavior.
  • an applet is a special application that can be used without downloading and installation. Users can open it by swiping or searching, and need not worry about installing too many applications: the applet is ubiquitous and ready to use, with no installation or uninstallation required.
  • take the check-in card as an example.
  • normally, the user needs to open an application with the check-in function by clicking and the like to complete the check-in operation.
  • on the scene service task interface, there is a check-in card, and the user can check in by clicking or a similar operation on the card.
  • the check-in card can be regarded as an address link: the user's click operation is performed on the check-in card but is directly linked to the application with the check-in function, so that a click on the check-in card is equivalent to the check-in operation performed after the user opens the application with the check-in function.
  • the mobile phone can also selectively present suggestive small cards to the user. For example, if the current user has a ticket for an untaken trip, or the user's conference schedule may be inconsistent with the city the user is currently in, the mobile phone may recommend a travel card to the user.
  • the role of the travel card may be to provide the user with services such as purchasing a ticket and selecting a seat.
  • similarly, the mobile phone can push an email small card to the user, which the user can use to check and reply to emails in real time.
  • the content presented to the user above is only a possible example and is not intended to limit the embodiments of the present invention.
  • further, the scene service tasks may be updated.
  • a scene service task may be replaced with a scene service task matching an event corresponding to a preset time range.
  • that is, the tasks can be updated according to a time point or time period. For example, if the user usually reads news between 8 and 10 a.m., then within that period a shortcut of a news application can be pushed to the scene service tasks.
  • a scene service task may also be replaced with one matching an event corresponding to a preset location range.
  • that is, tasks such as schedules and reminders are pushed based on the user's current location. For example, if there is a ticket for a trip and the schedule shows that the trip is today, the scene service task can provide the user with the navigation route and the time required for the journey based on the user's current location and the location of the airport, for the user's reference.
  • a scene service task may further be replaced with one matching a preset motion state. For example, if the mobile phone recognizes that the user's current speed is within a driving-speed range, it can consider the user to be in the driving state. If the preset motion state includes the driving state, then after confirming it, the mobile phone may push driving-related information to the user, such as the current driving speed and the remaining fuel amount, and optionally the road-condition information of the current route, which is not limited herein.
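The three update triggers above (time range, location range, motion state) can be sketched as predicates over a context object: whenever the context changes, the task list is rebuilt from the candidates whose predicate holds. The task names, predicates, and context fields below are illustrative assumptions.

```python
# Sketch of scene service task updating: each candidate task carries a
# predicate over the current context (time, location, motion state), and
# the displayed task list is the set of candidates whose predicate matches.

def update_scene_tasks(context, candidates):
    """Return the tasks whose predicate matches the current context."""
    return [name for name, predicate in candidates if predicate(context)]

# Invented candidate tasks mirroring the three triggers described above.
candidates = [
    ("news shortcut", lambda c: 8 <= c["hour"] < 10),        # time range
    ("check-in card", lambda c: c["location"] == "office"),  # location range
    ("driving info", lambda c: c["motion"] == "driving"),    # motion state
]
tasks = update_scene_tasks(
    {"hour": 9, "location": "office", "motion": "still"}, candidates)
```

At 9 a.m. near the office while stationary, this context selects the news shortcut and the check-in card but not the driving information.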
  • the scene service tasks may be of one or more types, which is not limited herein.
  • at a first time, a shortcut of a third application is displayed at a first preset position of the scene service task interface; in response to receiving the user's preset operation on the shortcut, the interface corresponding to the third application is displayed on the scene service task interface. At a second time, a shortcut of a fourth application is displayed at the first preset position of the scene service task interface; in response to receiving the user's preset operation on the shortcut, the interface corresponding to the fourth application is displayed on the scene service task interface.
  • the third application and the fourth application are determined by the electronic device according to the user's usage habits; the first time differs from the second time, and the third application differs from the fourth application.
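One plausible way to derive the habit-based application for each time is to count past usage per hour of day and pick the most frequent application. The usage log and hour-based bucketing below are invented for illustration; the patent only states that the choice follows the user's usage habits.

```python
# Hypothetical sketch of choosing the shortcut shown at the first preset
# position from usage habits: the application most frequently opened at
# the current hour of day wins. The usage log is invented.

from collections import Counter

def habitual_app(usage_log, hour):
    """Most frequently used application at the given hour of day."""
    counts = Counter(app for h, app in usage_log if h == hour)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

# (hour, application) pairs standing in for recorded usage history.
usage_log = [(8, "news"), (8, "news"), (8, "mail"),
             (20, "video"), (20, "video")]
```

With this log, the first preset position would show the news application at 8 a.m. (the "first time") and the video application at 8 p.m. (the "second time").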
  • in this way, the update of the scene service tasks provides the user with content adapted to the current scene.
  • the presentation form of the second button, that is, the button 208, may also change. For example, at the first time, content corresponding to the third application is displayed on the second button; at the second time, content corresponding to the fourth application is displayed on the second button.
  • the scene service tasks need not be updated in real time; that is, they may be updated periodically or from time to time.
  • for example, the user may set in advance a time point for updating the scene service tasks;
  • or the scene service tasks are updated after the user's operations meet a preset trigger condition. For example, if the user accesses an application more than a preset number of times within a period of time, it can be considered that the user may need to access the application repeatedly in the near future,
  • and a shortcut of the application may be placed in the scene service tasks.
  • alternatively, the mobile phone can update the scene service tasks each time the user lights up and/or unlocks the screen, which is not limited herein.
  • when the display interface is switched to a display interface including both the AI function entry interface and the scene service task interface, it may be presented as shown in FIG. 6(a), FIG. 6(b), or FIG. 7.
  • a scroll bar 218 can also be provided on the display interface, and the user can browse the scene service tasks by sliding the scroll bar 218.
  • a scroll bar controlling the entire display interface may be set; or, based on the two different functions, a scroll bar controlling the scene service tasks and a scroll bar controlling the AI function entry may be set separately.
  • by default, the user's sliding operation controls page turning or up-down and left-right movement of the interface.
  • for the button 207, the button 208, or the button 210, the functions of these buttons can be selectively turned "on" or "off".
  • the settings include options for the navigation bar.
  • the user can open the navigation-bar setting interface by clicking, as shown in FIG. 11(b).
  • there, the user can selectively enable one or both of the AI function entry and the scene service task.
  • the user can also choose to enable neither function.
  • after the AI function entry is enabled, the button 207 is presented, so that after the user acts on the button 207, a floating window is presented to let the user trigger the various AI functions.
  • the scene service task function can also be enabled in the navigation-bar setting interface, in a manner similar to enabling the AI function, and is not described again here.
  • the manner in which the user selectively enables the buttons of the AI function entry interface and the scene service task interface is not limited to the above; the user can also complete the setting through other interfaces.
  • alternatively, when the mobile phone is shipped from the factory, the button 207, the button 208, and the navigation key 206 may be presented together by default, or the button 210 and the navigation key 206 may be presented together, which is not limited herein.
  • when the AI function entry is enabled, the user can also choose whether to enable each basic AI function.
  • the basic AI functions include, but are not limited to, one or more of the above search function, sweep function, and voice input function. For example, if the user turns off the sweep function, then as shown in FIG. 10, the button 214 is absent from the area 213 where the fixed AI function entries are located.
  • a control device may be provided that includes hardware structures and/or software modules corresponding to the above functions in order to implement them.
  • those skilled in the art will readily appreciate that, in combination with the examples and algorithm steps described in the embodiments disclosed herein, the present invention can be implemented in hardware or in a combination of hardware and computer software. Whether a function is implemented in hardware or by computer software driving hardware depends on the specific application and the design constraints of the solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
  • each control device involved in the embodiments of the present invention is used to implement the methods in the foregoing method embodiments.
  • the embodiments of the present invention may divide the control device into function modules according to the foregoing method examples.
  • for example, each function module may correspond to one function, or two or more functions may be integrated into one processing module.
  • the integrated modules can be implemented in the form of hardware or of software function modules. It should be noted that the division of modules in the embodiments of the present invention is schematic and is only a logical function division; the actual implementation may use another division manner.
  • for example, the control device 30 includes a display module 31, a receiving module 32, and a processing module 33.
  • the display module 31 is configured to support the control device 30 in displaying the first interface, the AI function entry interface, the scene service task interface, and non-navigation buttons such as the first button and the second button involved in the embodiments of the present invention;
  • the receiving module 32 is configured to support the control device 30 in receiving the first input, the second input, the third input, and so on, which may also be input operations of the user on any content presented on the display interface, or input operations of the user on hard keys;
  • the processing module 33 is configured to support the control device 30 in performing operations such as semantic analysis and keyword extraction on the content presented on the display interface, and/or other processes of the techniques described herein.
  • optionally, the control device 30 further includes a communication module 34, configured to support data interaction between the control device 30 and each module in the terminal and/or communication between the terminal and other devices such as a server;
  • and a storage module 35, configured to support the control device 30 in storing the program code and data of the terminal.
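The module division of the control device 30 can be sketched structurally as plain classes wired together. This is a structural illustration only: the real device is a terminal with hardware and software modules, and the toy keyword-extraction logic stands in for the processing module's semantic analysis.

```python
# Structural sketch of control device 30: a processing module (module 33)
# analyzes screen content, and a display module (module 31) presents the
# result; the control device routes received input between them, playing
# the role of the receiving module (module 32).

class ProcessingModule:
    def extract_keywords(self, text):
        # Stand-in for semantic analysis: keep "long" words only.
        return [w for w in text.lower().split() if len(w) > 4]

class DisplayModule:
    def __init__(self):
        self.shown = []

    def show(self, item):
        self.shown.append(item)

class ControlDevice:
    def __init__(self):
        self.processing = ProcessingModule()
        self.display = DisplayModule()

    def on_input(self, screen_text):
        # Receiving-module role: forward the input to processing,
        # then hand each extracted keyword to the display module.
        for kw in self.processing.extract_keywords(screen_text):
            self.display.show(kw)

device = ControlDevice()
device.on_input("Unmanned driving news report")
```

The point of the sketch is the division of responsibilities, matching the schematic, logical-function division the text describes, rather than any particular extraction rule.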
  • the processing module 33 can be implemented as a processor or a controller, for example, a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof, capable of implementing or carrying out the various illustrative logical blocks, modules, and circuits described in connection with the present disclosure.
  • the processor may also be a combination implementing computing functions, for example, a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
  • the communication module 34 can be implemented as a transceiver, a transceiver circuit, a communication interface, or the like.
  • the storage module 35 can be implemented as a memory.
  • the terminal 40 includes a processor 41, a transceiver 42, a memory 43, a display 44, and a bus 45.
  • the processor 41, the transceiver 42, the memory 43, and the display 44 are connected to one another through the bus 45.
  • the bus 45 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
  • the bus can be divided into an address bus, a data bus, a control bus, and so on. For ease of representation, only one thick line is shown in FIG. 13, but this does not mean that there is only one bus or one type of bus.
  • the steps of the methods or algorithms described in connection with the present disclosure may be implemented in hardware, or by a processor executing software instructions.
  • the software instructions may consist of corresponding software modules, which may be stored in a random access memory (RAM), a flash memory, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art.
  • an exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium.
  • an embodiment of the present invention provides a chip, module, or device for implementing the methods in the foregoing method embodiments, specifically including a display, a processor, and an input device connected to the control device, to perform a control method according to an embodiment of the present invention.
  • an embodiment of the present invention provides a readable storage medium storing instructions that, when run on a terminal, cause the terminal to execute the method of any one of the foregoing method embodiments.
  • an embodiment of the present invention provides a computer program product including software code for executing the method of any one of the foregoing method embodiments.


Abstract

A control method and apparatus, relating to the field of terminal technologies, capable of solving the problem that artificial-intelligence functions such as a voice input function are too cumbersome to invoke. The control method includes: displaying a first interface; receiving a first input of a user acting on a non-navigation button (207, 208); and, in response to the first input, displaying at least one of an AI function entry interface and a scene service task interface corresponding to the non-navigation button (207, 208). The first interface contains a navigation bar (204) provided with a navigation key (206) and at least one non-navigation button (207, 208). When the navigation key (206) is triggered, the electronic device performs at least one of returning to the previous interface, jumping to the home screen, and calling up the interfaces of applications accessed within a preset time before the current moment; when the at least one non-navigation button (207, 208) is triggered, the electronic device displays at least one of the artificial intelligence (AI) function entry interface and the scene service task interface.

Description

Control Method and Apparatus

Technical Field

The present application relates to the field of terminal technologies, and in particular, to a control method and apparatus.
Background

With the development of terminal technologies, and especially of artificial intelligence (AI) technology, users' demand for AI functions such as the voice input function keeps growing. At present, many terminals can implement a voice input function. Taking a mobile phone as an example, the user can find the settings icon on the home screen, click it to enter the settings interface, find the control switch for the voice input function there, and then turn the function on. The user can then invoke functions such as making a call by speaking a specified voice command. To reduce the phone's misrecognition of voice commands, the user can turn the voice input function off in the same way after finishing using it.

Although the above implementation can effectively realize artificial intelligence, the step-by-step searching and clicking operations degrade the user experience, and for users unfamiliar with these operations, they are too cumbersome and hard to master.
Summary

Embodiments of the present invention provide a control method and apparatus, which can solve the problem that AI functions such as the voice input function are too cumbersome to invoke.
According to a first aspect, an embodiment of the present invention provides a control method performed by an electronic device. The method includes: displaying a first interface; receiving a first input of a user acting on a non-navigation button; and, in response to the first input, displaying at least one of an AI function entry interface and a scene service task interface corresponding to the non-navigation button. The first interface contains a navigation bar provided with a navigation key and at least one non-navigation button. When the navigation key is triggered, the electronic device performs at least one of returning to the previous interface, jumping to the home screen, and calling up the interfaces of applications accessed within a preset time before the current moment; when the at least one non-navigation button is triggered, the electronic device displays at least one of the AI function entry interface and the scene service task interface. Compared with prior-art solutions in which the user must perform multiple operations to invoke a specific AI function, in the embodiments of the present invention, because a non-navigation button is provided in the navigation bar, the user can act on it to trigger the display of the AI function entry interface and/or the scene service task interface. In this way, on the home screen, application interfaces, and many other interfaces, thanks to the global display of the navigation bar, the user can usually act on the non-navigation button in any application scenario. This lowers the difficulty of invoking the AI function entry interface or the scene service task interface and thus solves the problem that AI functions such as the voice input function are too cumbersome to invoke.
In a possible implementation, the at least one non-navigation button is a single button. Displaying, in response to the first input, at least one of the AI function entry interface and the scene service task interface corresponding to the non-navigation button may then be implemented as displaying, in response to the first input, the AI function entry interface and the scene service task interface corresponding to the button. Thus, the user can perform different operations on this single button to display the AI function entry interface and the scene service task interface separately, or perform one operation on the button to display both at the same time. In other words, a single non-navigation button allows the user to call up both interfaces at the same moment, or to call up different interfaces at different moments through different operations on that button. It should be noted that a single non-navigation button, while still allowing the AI function entry interface and/or the scene service task interface to be invoked, further saves space in the navigation bar.
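The single-button behavior above can be sketched as a gesture dispatcher: different operations on the same button call up different interfaces, or one operation calls up both. The gesture names and interface labels below are assumptions for illustration; the patent leaves the concrete operations open.

```python
# Illustrative dispatcher for a single non-navigation button: each gesture
# on the button maps to the interface(s) to display. Gesture names and the
# mapping are invented; the patent only requires that different operations
# can yield different interfaces.

def dispatch(gesture):
    """Return the interfaces to display for a gesture on the button."""
    table = {
        "tap": ["AI function entry interface"],
        "long_press": ["scene service task interface"],
        "swipe_up": ["AI function entry interface",
                     "scene service task interface"],
    }
    return table.get(gesture, [])
```

An unrecognized gesture displays nothing, leaving the current interface unchanged.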
In a possible implementation, the at least one non-navigation button consists of two buttons. Receiving the user's first input on the non-navigation button and, in response, displaying at least one of the corresponding interfaces may then be implemented as: receiving a second input of the user on the first button and, in response, displaying the AI function entry interface corresponding to the first button; and receiving a third input of the user on the second button and, in response, displaying the scene service task interface corresponding to the second button. The second input and the third input may be the same or different; for example, each includes but is not limited to one of a click, a double click, a long press, a left swipe, a right swipe, a pressure operation, and a hover operation. In the embodiments of the present invention, the purpose of providing two non-navigation buttons is to let the user trigger different interface displays by acting on different buttons.
In a possible implementation, displaying the AI function entry interface corresponding to the first button in response to the second input may be implemented as: in response to the second input, displaying the AI function entry interface floating over the first interface. The AI function entry interface can thus be displayed in a floating manner, for example by popping up a floating window on the first interface. For the user, this does not change the layout of the first interface currently displayed; instead, the AI function entry interface is overlaid on it, making it convenient to invoke AI functions. Moreover, since the content of the AI function entry interface is often selectively recommended based on the content displayed on the first interface, the floating display makes it easier for the user to view the first interface while browsing the recommended content. For example, the user can dynamically adjust the size and position of the floating AI function entry interface, and even its transparency during presentation, which is not limited here.
In a possible implementation, displaying the scene service task interface corresponding to the second button in response to the third input may be implemented as: in response to the third input, switching the first interface to the scene service task interface. Considering that the scene service task interface often presents a large amount of content, that is, many scene service tasks are recommended to the user, the embodiments of the present invention may use interface switching to replace the first interface currently presented to the user with the scene service task interface for the user to access, so as to keep the displayed content clear.
In a possible implementation, the first interface is a first application interface. Displaying the AI function entry interface corresponding to the first button in response to the second input may then be implemented as: in response to receiving the user's preset operation on the first button on the navigation bar of the first application interface, displaying first recommendation information on the first application interface, where the first recommendation information is determined by the AI according to one or more display objects on the first application interface, and a display object is at least one of text, voice, or image information.
在一种可能的实现方式中,在第一应用界面上显示第一推荐信息具体为以下情况中的至少一种:在第一应用界面的输入框中显示第一推荐信息;在第一应用界面上悬浮显示第一推荐信息;修改第一应用界面的界面并在修改后的第一应用界面上显示第一推荐信息。以诸如聊天界面等信息回复、信息互动界面为例,在第一应用界面的输入框中显示第一推荐信息,可以有效节省用户回复消息时编辑回复内容的时间。比如,手机可以通过诸如语义分析等处理方式,提取当前显示内容中的一个或是多个关键字, 之后结合提取到的关键字与已有数据库中的内容进行匹配,向用户选择性地推荐用户可能期望回复给对端用户的文字、语音及图片中的一项。这样可以节省用户编辑回复内容所耗费的时间。另外,对于手机这种输入键盘较小的设备而言,降低了用户使用输入键盘的频率。此外,对于对端用户而言,由于节省了编辑回复内容的用户的编辑时间,因此,也节省了对端用户的等待时间,即对于对端用户而言,在消息发出后很快就能得到回复。
在一种可能的实现方式中,第一推荐信息为网络地址链接,文字,图片或表情中的至少一种。也就意味着,以上述聊天界面为例,手机能够向用户推送多种多样的推荐信息,供用户直接实现消息回复。
在一种可能的实现方式中,第一推荐信息为网络地址链接,在第一应用界面上显示第一推荐信息之后,上述方法进一步包括:响应于用户对网络地址链接的预设操作,在第一应用界面上显示网络地址链接指向的内容。对于用户而言,可以通过对网络地址链接执行预设操作,就能够使当前界面中呈现出该网络地址链接指向的内容。以用户需要实现内容搜索为例,这样快捷的提示方式,省去了用户退出当前显示界面,进入具有搜索功能的应用程序,再实现搜索的繁琐操作,带给用户更加便捷地操作体验。
在一种可能的实现方式中,第一应用界面为取景界面。那么第一推荐信息为显示在第一应用界面上的一个或多个显示对象对应的信息,该显示对象为图像信息。比如,用户使用手机对周围环境进行拍摄,在拍摄图像、视频的预览过程中,手机可以自动识别出当前的拍摄过程为取景界面,此时取景界面中呈现的显示对象,可以供手机确定第一推荐信息。那么手机可以通过识屏等功能,对显示对象进行识别,并基于识别结果完成与识别结果相关的搜索、推送等功能。
在一种可能的实现方式中,AI功能入口界面还包括语音、图像和文字搜索,以及保存功能按钮中的至少一项。
在一种可能的实现方式中,响应于第一输入,显示与非导航按钮对应的AI功能入口界面,可以实现为:响应于第一输入,对第一界面上的内容进行语义分析,提取一个或多个关键字,显示包含有特定信息的AI功能入口界面。其中,特定信息为与提取的关键字对应的信息。
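上述"语义分析、提取关键字、展示特定信息"的流程，可以用如下 Python 草图示意。此处以简单的关键词表匹配代替真实的语义分析模型，关键词表与其对应的信息均为假设示例：

```python
# 示意性草图：对第一界面上的内容做简化的"语义分析"——此处以关键词表
# 匹配代替真实的语义分析模型；关键词与推荐信息的映射均为假设示例。

KEYWORD_INFO = {
    "无人驾驶": ["相关新闻链接", "技术文档链接"],
    "机票": ["航班动态卡片", "值机入口"],
}

def extract_keywords(screen_text: str) -> list:
    """从当前界面文本中提取命中的一个或多个关键字。"""
    return [kw for kw in KEYWORD_INFO if kw in screen_text]

def build_ai_entry(screen_text: str) -> dict:
    """返回包含特定信息的AI功能入口界面内容（关键字 -> 对应信息）。"""
    return {kw: KEYWORD_INFO[kw] for kw in extract_keywords(screen_text)}
```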
在一种可能的实现方式中,场景服务任务界面,包括:在第一时间,在场景服务任务界面的第一预设位置上显示第三应用程序的快捷方式,响应于接收到用户对第三应用程序的快捷方式的预设操作,在场景服务任务界面上显示第三应用程序对应的界面;在第二时间,在场景服务任务界面的第一预设位置上显示第四应用程序的快捷方式,响应于接收到用户对第四应用程序的快捷方式的预设操作,在场景服务任务界面上显示第四应用程序对应的界面。其中,第三应用程序和第四应用程序是电子设备根据用户使用习惯确定的;第一时间不同于第二时间,第三应用程序不同于第四应用程序。由此可见,在不同的时间点,场景服务任务界面可能根据场景的不同发生或多或少的变化。当然,场景服务任务界面更新的前提包括但不限于时间的变化,还可以为设备所处位置发生变化、提醒事项的变化等,在此不予限定。
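上述在第一时间与第二时间分别显示第三、第四应用程序快捷方式的行为，可以用如下 Python 草图示意。时间段划分与应用名称均为根据"用户使用习惯"假设的示例：

```python
# 示意性草图：按时间段从用户使用习惯统计中选出应显示在场景服务任务界面
# 第一预设位置上的应用程序快捷方式。时间段与应用名均为假设示例。

USAGE_HABITS = {
    range(8, 10): "新闻应用",   # 第三应用程序：上午常读新闻
    range(12, 14): "会议日程",  # 第四应用程序：午间常看日程
}

def shortcut_at(hour: int) -> str:
    """返回在给定整点时刻应显示的应用快捷方式；无匹配时返回空字符串。"""
    for period, app in USAGE_HABITS.items():
        if hour in period:
            return app
    return ""
```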
在一种可能的实现方式中,对于场景服务任务界面的触发按钮,即第二按钮,在第一时间,第二按钮上显示与第三应用程序对应的内容;在第二时间,第二按钮上显 示与第四应用程序对应的内容。也就意味着,随着场景服务任务界面的变化,第二按钮也会随之发生改变,从而更有效地向用户提示当前场景服务任务界面所呈现的内容。
在一种可能的实现方式中,第一界面为主界面,第一界面还包括停靠Dock区,Dock区用于放置应用程序的快捷方式。也就意味着,导航栏与Dock区属于分设于显示界面不同位置的两个功能区域。在本发明实施例中,相比较于Dock区而言,导航栏具有全局显示功能。
第二方面,本发明实施例提供一种控制装置。该装置可以实现上述方法实施例中所实现的功能,所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个上述功能相应的模块。
第三方面,本发明实施例提供一种终端。该终端的结构中包括显示屏,存储器,一个或多个处理器,多个应用程序,以及一个或多个程序;其中,所述一个或多个程序被存储在所述存储器中;所述一个或多个处理器在执行所述一个或多个程序时,使得该终端实现第一方面及其各种可能的设计中任意一项所述的方法。
第四方面,本发明实施例提供一种可读存储介质,包括指令。当该指令在终端上运行时,使得该终端执行上述第一方面及其各种可能的设计中任意一项所述的方法。
第五方面,本发明实施例提供一种计算机程序产品,该计算机程序产品包括软件代码,该软件代码用于执行上述第一方面及其各种可能的设计中任意一项所述的方法。
第六方面,本发明实施例提供一种图形或用户界面,用于执行上述第一方面及其各种可能的设计中任意一项所述的方法。
附图说明
图1为本发明实施例提供的第一种终端的结构示意图;
图2(a)为本发明实施例提供的第一种显示界面的示意图;
图2(b)为本发明实施例提供的第二种显示界面的示意图;
图3(a)为本发明实施例提供的第一种导航栏的示意图;
图3(b)为本发明实施例提供的第三种显示界面的示意图;
图4(a)为本发明实施例提供的第二种导航栏的示意图;
图4(b)为本发明实施例提供的第四种显示界面的示意图;
图5(a)为本发明实施例提供的第三种导航栏的示意图;
图5(b)为本发明实施例提供的第五种显示界面的示意图;
图6(a)为本发明实施例提供的第六种显示界面的示意图;
图6(b)为本发明实施例提供的第七种显示界面的示意图;
图7为本发明实施例提供的第八种显示界面的示意图;
图8(a)为本发明实施例提供的第五种显示界面的示意图;
图8(b)为本发明实施例提供的第六种显示界面的示意图;
图9为本发明实施例提供的第七种显示界面的示意图;
图10为本发明实施例提供的第八种显示界面的示意图;
图11(a)为本发明实施例提供的第九种显示界面的示意图;
图11(b)为本发明实施例提供的第十种显示界面的示意图;
图12为本发明实施例提供的一种控制装置的结构示意图;
图13为本发明实施例提供的第二种终端的结构示意图。
附图标记说明:
201-状态栏;
202-系统区;
203-Dock区;
204-导航栏;
205-分页标记;
206-导航键;
207-用于触发显示AI功能入口界面的按钮;
208-用于触发显示场景服务任务界面的按钮;
209-悬浮窗口;
210-用于触发显示AI功能入口界面和场景服务任务界面的按钮;
211-大卡片;
212-小卡片;
213-固定AI功能入口所在区域;
214-扫一扫功能的快捷按钮;
215-搜索功能的快捷按钮;
216-语音输入功能的快捷按钮;
217-位置信息;
218-滚动条。
具体实施方式
本发明实施例可以用于一种终端(即电子设备),该终端可以为笔记本电脑、智能手机、虚拟现实(Virtual Reality,VR)设备、增强现实技术(Augmented Reality,AR)、车载设备或智能可穿戴设备等设备。该终端可以至少设置有显示屏、输入设备和处理器,以终端100为例,如图1所示,该终端100中包括处理器101、存储器102、摄像头103、RF电路104、音频电路105、扬声器106、话筒107、输入设备108、其他输入设备109、显示屏110、触控面板111、显示面板112、输出设备113、以及电源114等部件。其中,显示屏110至少由作为输入设备的触控面板111和作为输出设备的显示面板112组成。需要说明的是,图1中示出的终端结构并不构成对终端的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置,在此不做限定。
下面结合图1对终端100的各个构成部件进行具体的介绍:
射频(Radio Frequency,RF)电路104可用于收发信息或通话过程中,信号的接收和发送,比如,若该终端100为手机,那么该终端100可以通过RF电路104,将基站发送的下行信息接收后,传送给处理器101处理;另外,将涉及上行的数据发送给基站。通常,RF电路包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路104还可以通过无线通信与网络和其他设备通信。该无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通 用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。
存储器102可用于存储软件程序以及模块，处理器101通过运行存储在存储器102的软件程序以及模块，从而执行终端100的各种功能应用以及数据处理。存储器102可主要包括存储程序区和存储数据区，其中，存储程序区可存储操作系统、至少一个功能所需的应用程序(比如，声音播放功能、图像播放功能等)等；存储数据区可存储根据终端100的使用所创建的数据(比如，音频数据、视频数据等)等。此外，存储器102可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。
其他输入设备109可用于接收输入的数字或字符信息,以及产生与终端100的用户设置以及功能控制有关的键信号输入。具体地,其他输入设备109可包括但不限于物理键盘、功能键(比如,音量控制按键、开关按键等)、轨迹球、鼠标、操作杆、光鼠(光鼠是不显示可视输出的触摸敏感表面,或者是由触摸屏形成的触摸敏感表面的延伸)等中的一种或多种。其他输入设备109还可以包括终端100内置的传感器,比如,重力传感器、加速度传感器等,终端100还可以将传感器所检测到的参数作为输入数据。
显示屏110可用于显示由用户输入的信息或提供给用户的信息以及终端100的各种菜单,还可以接受用户输入。此外,显示面板112可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板112;触控面板111,也称为触摸屏、触敏屏等,可收集用户在其上或附近的接触或者非接触操作(比如,用户使用手指、触笔等任何适合的物体或附件在触控面板111上或在触控面板111附近的操作,也可以包括体感操作;该操作包括单点控制操作、多点控制操作等操作类型),并根据预先设定的程式驱动相应的连接装置。需要说明的是,触控面板111还可以包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位、姿势,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成处理器101能够处理的信息,再传送给处理器101,并且,还能接收处理器101发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板111,也可以采用未来发展的任何技术实现触控面板111。一般情况下,触控面板111可覆盖显示面板112,用户可以根据显示面板112显示的内容(该显示内容包括但不限于软键盘、虚拟鼠标、虚拟按键、图标等),在显示面板112上覆盖的触控面板111上或者附近进行操作,触控面板111检测到在其上或附近的操作后,传送给处理器101以确定用户输入,随后处理器101根据用户输入,在显示面板112上提供相应的视觉输出。虽然在图1中,触控面板111与显示面板112是作为两个独立的部件来实现终端100的输入和输出功能,但是在某些实施例中,可以将触控面板111与显示面板112集成,以实现终端100的输入和输出功能。
RF电路104、扬声器106,话筒107可提供用户与终端100之间的音频接口。音 频电路105可将接收到的音频数据转换后的信号,传输到扬声器106,由扬声器106转换为声音信号输出;另一方面,话筒107可以将收集的声音信号转换为信号,由音频电路105接收后转换为音频数据,再将音频数据输出至RF电路104以发送给诸如另一终端的设备,或者将音频数据输出至存储器102,以便处理器101结合存储器102中存储的内容进行进一步的处理。另外,摄像头103可以实时采集图像帧,并传送给处理器101处理,并将处理后的结果存储至存储器102和/或将处理后的结果通过显示面板112呈现给用户。
处理器101是终端100的控制中心,利用各种接口和线路连接整个终端100的各个部分,通过运行或执行存储在存储器102内的软件程序和/或模块,以及调用存储在存储器102内的数据,执行终端100的各种功能和处理数据,从而对终端100进行整体监控。需要说明的是,处理器101可以包括一个或多个处理单元;处理器101还可以集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面(User Interface,UI)和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器101中。
终端100还可以包括给各个部件供电的电源114(比如,电池),在本发明实施例中,电源114可以通过电源管理系统与处理器101逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗等功能。
此外,图1中还存在未示出的部件,比如,终端100还可以包括蓝牙模块、传感器等,在此不再赘述。
下面以上述终端100为手机为例,对本发明实施例提供的技术方案进行阐述。
以图2(a)所示的手机为例,在手机的显示界面中包括状态栏201、系统区202、Dock区203和导航栏204。其中,分页标记205位于系统区202中,导航键206、用于触发显示AI功能入口界面的按钮207,即第一按钮,以及用于触发显示场景服务任务界面的按钮208,即第二按钮位于导航栏204中。其中,按钮207和按钮208可以视为非导航按钮的一种可能的实现形式。另外,下文提及的用于触发显示AI功能入口界面和场景服务任务界面的按钮210也可以被视为非导航按钮的一种可能的实现形式。
在本发明实施例中,系统区202用于显示手机已安装的应用程序的图标,以及文件夹。Dock区203用于显示用户期望在每页主界面都能够查看到的应用程序的图标。导航栏204通常情况下,可以显示在任何一个显示界面中,即用户在访问任何界面时,都可以在正在访问的界面中看到导航栏204,并触发导航栏204上的按钮以使得手机执行对应的功能。比如,用户可以通过单击该导航键206,触发手机从当前显示界面返回至上一界面,或是通过长按该导航键,触发手机呈现出主屏幕界面,或是通过左右滑动该导航键所在区域,触发手机呈现近期访问的应用程序等。导航键206也可以是三个导航按钮,分别对应返回上一界面,返回主屏幕和显示近期访问的应用程序;导航键206也可以是两个导航按钮,通过不同的操作,例如单击,双击,长按或压力,悬浮操作等分别实现以上三个功能。其中,近期访问的应用程序指的是截止当前时刻为止的预设时间内访问的应用程序,或者可以理解为,指的是手机在最近一次开机后,即本次开机后,截止到当前时刻为止,所有处于前台和后台运行的应用程序。导航键对应的三个功能是现有技术,此处不再赘述。
由于按钮207和按钮208设置于导航栏204中,因此,按钮207、按钮208,与导航键206类似,均具有全局显示的功能,即无论手机当前处于哪个显示界面,只要在该显示界面中存在导航栏204,那么按钮207和按钮208就与导航键206同时显示。或者可以理解为,即便导航键206被隐藏,但只要在显示界面中能够显示导航栏204,那么按钮207和按钮208就可以显示。
需要说明的是,在本发明实施例中,以导航栏204中包括单个导航键,即能够触发多功能的导航键206为例,对本发明实施例进行阐述。但本发明实施例所采用的技术方案,同样可以适应于其他方式设置的导航栏,比如,在导航栏中包括有三个或两个按键。
考虑到图2(a)所示的导航键206的设置方式,能够有效节省导航栏204的空间,使导航栏204中具备足够的空间放置其他按钮,比如,在导航栏204的空闲区域中,可以设置用于触发显示AI功能入口界面的按钮207,以及用于触发显示场景服务任务界面的按钮208。
如图2(a)所示,在导航栏204中,按钮207和按钮208分别位于导航键206的两侧,以充分利用导航栏204的空闲区域,比如,按钮207位于导航键206的左侧,按钮208位于导航键206的右侧。在本发明实施例中,对按钮207和按钮208的位置不予限定,比如,对于两个按钮分设于导航键206两侧的情况而言,还可以实现为如图2(b)所示的设置方式,即按钮207位于导航键206的右侧,按钮208位于导航键206的左侧。
为了方便用户使用,在用户作用于按钮207后,手机可以向用户呈现包括一个或是多个AI功能入口的AI功能入口界面,即用户可以通过单击、双击、滑动(即向左滑动、向右滑动、向上滑动或是向下滑动等)、压力、长按、大面积手势,以及悬浮触控等操作方式启动导航栏中的按钮。同样的,在用户作用于按钮208后,手机可以向用户呈现包括一个或是多个场景服务任务在内的场景服务任务,供用户访问。即在本发明实施例中,用户作用于按钮207的第二输入和用户作用于按钮208的第三输入,可以相同或是不同。
考虑到诸多AI功能中,用户可能更倾向于使用某一个AI功能,比如,扫一扫功能、搜索功能和语音输入功能中的一项。上述例举的AI功能属于当前使用频次较高或是实用性较强的AI功能,当然,上述所指的某一个AI功能不限于上述例举功能中的一项,还可以为其他被用户认为是较为常用的AI功能。其可以是出厂默认设置,也可以通过用户设置实现或改变,还可以通过对用户使用习惯分析确定等,在此不予限定。
对于这种情况而言，按钮207可以为用户提供一键访问单个AI功能的服务，比如，如图3(a)和图3(b)所示。按钮207显示为扫一扫功能的图标，用户可以更直观地了解到作用于按钮207后能够触发的AI功能。在用户作用于按钮207后，如图3(b)所示，手机可以弹出悬浮窗口209，以向用户呈现扫一扫功能的操作界面，此时，用户可以直接通过手机完成对二维码、条形码等码的扫描和识别。也就意味着，用户通过作用于按钮207的一步操作，调用出扫一扫功能，以方便用户操作。
再比如,如图4(a)和图4(b)所示。用户可以通过作用于按钮207的一步操作,调用出搜索功能。
再比如,如图5(a)和图5(b)所示。用户可以通过作用于按钮207的一步操作,调用出语音输入功能。
对于用户而言,上述几种可能的按钮207的设置方式,均能够使用户更直观地了解按钮207所能触发的一项AI功能,此时,用户可以结合自身需求,方便、快捷地调出按钮207所对应的一项AI功能。
需要说明的是,对于按钮207所能够实现的功能,可以由用户预先设定,或是手机在出厂前完成设置,至于具体的设置方式,会在后文提出,在此不予赘述。
考虑到按钮208用于触发场景服务任务,而场景服务任务可能随着场景的变化而发生改变,因此,在本发明实施例中,很可能存在当前场景不存在匹配的场景服务任务。那么在这种情况下,用户点击按钮208很可能会调出一个空白界面或是不会调出任何界面。其中,当前场景与场景服务任务的匹配规则包括但不限于:用户从手机提供的多种场景中选择一种或多种或者是当手机的一个或多个预设参数符合预设场景(条件),或者用户操作符合预设条件时,手机可以自动显示对应的场景图标,以提示用户当前场景可用的场景服务任务,或为用户提供与当前场景相关的信息。
为了避免上述调出无效界面情况的发生,在本发明实施例中,当按钮208不存在对应的界面时,或是理解为当与按钮208对应的界面为空白界面时,在导航栏204中可以不向用户呈现按钮208。此时,可以认为按钮208被隐藏,或是在导航栏204中不存在按钮208。这样对于用户而言,就不会因作用于按钮208而调出无效界面。其中,按钮208被隐藏可以理解为,用户可以通过设置的方式在导航栏204中调出按钮208,也可以在按钮208存在对应的非空白界面时,使按钮208自动在导航栏204中显示,在此不予限定。
需要说明的是,为了更直观地向用户展示用户作用于按钮208后得到的呈现效果,在本发明实施例中,按钮208选择性地向用户呈现场景服务任务的内容。例如,场景服务任务包括但不限于航班、火车、酒店、目的地朋友、目的地推荐、休息提醒、会议、快递、运动健康、数据流量报告以及手机使用情况中的一项或是多项。
比如,如图2(a)所示,按钮208在呈现时,显示为一个包括飞机图形的图标。此时,用户可以直观地了解到当前有与飞行或者出行相关的任务或者当前处于飞行相关场景下,用户作用于按钮208后,能够得到与机票相关的场景服务任务。其中,与机票相关的场景服务任务可能会向用户呈现已购买机票对应的航班出发时间、到达时间、出发地、目的地、机场信息、飞行时长、里程以及去机场的交通情况等信息中的至少一项,还可能结合用户当前所处位置,选择性地向用户推送恰当的抵达机场的出行路线、出行方式等,当然还可能关联手机中已有的应用程序,在用户许可的情况下,为用户提供诸如约车,定酒店,推荐目的地联系人信息等便捷服务。
再比如,如图2(b)所示,按钮208在呈现时,显示为一个包括天气图形的图标。此时,用户可以直观的了解到当前作用于按钮208后,能够得到与天气情况相关的场景服务任务。其中,与天气情况相关的场景服务任务可能会向用户呈现当前温度、可吸入颗粒物占比、接下来一段时间可能的温度变化情况等信息中的至少一项,还可能结合诸如手环等与手机关联的其他设备所采集到的用户体感温度等参数,选择性地向用户推荐适合当前天气情况的服装类型等,当然还可能关联手机中已有的应用程序, 在用户许可的情况下,为用户提供诸如开启房间内空气净化器、空调等便捷服务。
再比如,如图3(a)所示,按钮208在呈现时,显示为一个包括餐具图形的图标。此时,用户可以直观地了解到当前作用于按钮208后,能够得到与饮食相关的场景服务任务。其中,与饮食相关的场景服务任务可能会向用户呈现附近提供餐饮的场所、消费水平、推荐的菜品等信息中的至少一项,还可能结合用户当前所处位置,选择性地向用户推送恰当的抵达该场所可选的路线、出行方式等,当然还可能关联手机中已有的应用程序,在用户许可的情况下,为用户推送诸如团购、优惠买单等服务,还可能结合获取到的当前用户的运动信息,为用户提供餐饮附近的停车场、加油站等信息。
再比如,如图4(a)所示,按钮208在呈现时,显示为一个包括闹钟图形的图标。此时,用户可以直观地了解到当前作用于按钮208后,能够得到与日程安排、提醒事项相关的场景服务任务。其中,与日程安排、提醒事项相关的场景服务任务可能会向用户呈现今日未到发生时间的日程安排、提醒事项,同样的,还可能结合用户当前所处位置,选择性地向用户推送恰当的抵达日程安排、提醒事项发生的目的地所能够采用的路线、出行方式以及天气预报等。
再比如,如图5(a)所示,按钮208在呈现时,显示为一个包括礼物图形的图标。此时,用户可以直观地了解到当前作用于按钮208后,能够得到与购物相关的场景服务任务。其中,与购物相关的场景服务任务可能会向用户呈现购物网站、网红产品的链接、目前畅销且用户很可能需要购买的产品、用户加入心愿单的产品存在的优惠条件等信息中的至少一项。
需要说明的是,上述各种实现方式作为一种可能的情况,并不作为对本发明实施例的限定。由此可见,按钮208可以随着场景的变化而发生改变,目的在于使用户更加直观地了解到作用于按钮208后所能够得到的场景服务任务的种类。按钮208的图标可以由用户预先设定,比如,手机为用户提供诸多图标选项,由用户预先为不同种类的场景服务任务设置对应的图标,这样能够使用户在看到图标后就能够了解当前手机试图向用户推荐的场景服务任务的内容。或者,参考诸如应用商店等供用户下载、更新各种不同功能的应用程序的平台中对应用程序进行分类时所采用的图标,将这类被大部分用户认可且公知的图标作为区分不同场景服务任务的图标,从而使大部分用户均能直观地了解手机试图向用户推荐的场景服务任务的内容等。
同样的,按钮207的呈现形式也可能根据当前所处场景的不同而发生变化,而不限于在触发单个AI功能时才呈现出表示该单个AI功能的图标,也就意味着,即便用户作用于按钮207后得到多个AI功能入口,但按钮207的呈现形式仍然可以多样化。在此对于按钮207、按钮208的呈现形式、发生变化的时机、触发变化的条件等,不予限定。上述场景包括但不仅限于手机当前显示界面呈现的内容,还可能包括当前界面所属的应用、用户当前所处位置、当前时间、当前的用户状态等,在此不予限定。其中,手机当前显示界面呈现的内容,可以通过诸如识屏等方式进行识别;当前界面所属的应用可以从应用属性信息处获取或者通过网络查询获得;用户当前所处位置可以结合手机的定位功能等方式进行识别;当前时间可以从手机的时钟所呈现的实时变化的时间来获取;当前的用户状态可以结合手机中用于监控用户健康状态的应用程序来获取,或是通过诸如手环等可穿戴设备所检测到的参数来确定等,在此不予限定。
对于按钮208而言,若场景服务任务的种类包括至少两个,那么按钮208的图标可以基于场景服务任务的种类所对应的优先级,向用户呈现优先级最高的那类场景服务任务对应的图标。其中,场景服务任务不同种类的优先级,可以由用户预先设置,或是在手机出厂时根据大部分用户的使用习惯进行设置,可选择性地向用户提供修改上述优先级的功能,以达到为用户提供更加人性化服务的目的。
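上述按优先级选取按钮208图标的逻辑，可以用如下 Python 草图示意。任务种类及其优先级数值均为假设示例，实际可由用户设置或修改：

```python
# 示意性草图：当存在至少两类场景服务任务时，按预设优先级选出按钮208
# 应显示的那一类任务的图标。种类与优先级数值均为假设示例。

DEFAULT_PRIORITY = {"航班": 3, "会议": 2, "快递": 1}

def pick_icon(active_tasks: list, priority: dict = None) -> str:
    """返回优先级最高的场景服务任务种类，作为按钮图标的依据。"""
    priority = priority or DEFAULT_PRIORITY
    return max(active_tasks, key=lambda t: priority.get(t, 0))
```

用户修改优先级表后，同样的任务集合即可选出不同的图标，从而达到"可选择性修改优先级"的效果。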
当然,手机还可以同时向用户呈现至少两个种类分别对应的图标。其中,呈现形式包括但不限于至少两个图标重叠显示,或是至少两个图标交替显示等,在此不予限定。
以两个图标为例,若两个图标重叠显示,那么可以为两个图标设置两个不同的图层,比如,第一图层显示一个图标,第二图层显示另一个图标。为了更加清晰地向用户呈现显示效果,可以将两个图标设置为反差较大的不同颜色,或是采用一定透明度显示这两个图标,在此不予限定。此外,两个图标还可以部分重叠显示,即一个图标的后半部分与另一个图标的前半部分重叠,比如,一个图标完整显示,而另一个图标位于第二图层,在该一个图标所在的第一图层下方,即另一个图标显示未被该一个图标覆盖的那一部分。
若两个图标交替显示,那么可以预先设置每个图标在交替前后显示的时长,及可以区分设置不同图标单次显示的时长,或是将两个图标单次显示的时长设置为相同,在此不予限定。即在一段时间内显示第一图标,而在相邻的另一段时间内显示第二图标,之后再显示第一图标,依次类推,实现交替显示。需要说明的是,每个图标单次显示的时长可以根据该图标对应场景服务任务的优先级进行设置,优先级可以由用户根据历史经验值或自己的主观意识预先设置,在此不予限定。
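上述两个图标按各自单次显示时长循环交替的逻辑，可以用如下 Python 草图示意。图标名称与时长数值均为假设示例：

```python
# 示意性草图：两个图标按各自单次显示时长交替显示。给定自起始时刻经过的
# 秒数，计算当前应显示的图标；图标名与时长数值均为假设示例。

def icon_at(elapsed: int, icons=("航班图标", "天气图标"), durations=(6, 4)) -> str:
    """按 durations 中各图标的单次显示时长循环交替，返回当前图标。"""
    cycle = sum(durations)          # 一个完整交替周期的总时长
    t = elapsed % cycle
    for icon, d in zip(icons, durations):
        if t < d:
            return icon
        t -= d
    return icons[-1]
```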
在上述示例的情况中,位于导航栏204的按钮207与按钮208分别对应一个功能,即若用户试图调出按钮207与按钮208各自对应的功能,需要分别执行作用于按钮207和作用于按钮208的操作。
为了进一步简化用户的操作,节省导航栏204中被占用的空间,在一种可能的实现方式中,可以将用于触发显示AI功能入口界面的按钮207与用于触发显示场景服务任务界面的按钮208集成,即设置一个用于触发显示AI功能入口界面和场景服务任务界面的按钮210,如图6(a)或图6(b)所示。当然,导航栏上也可以只设置按钮207和208中的一个。
按钮210的设置方式与按钮207、按钮208类似,可以参考前文对按钮207、按钮208的描述,在此不予赘述。同样的,作用于按钮210的输入,与前文第二输入、第三输入类似,在此不予赘述。需要说明的是,在用户作用于按钮210后,可以调出AI功能入口界面和场景服务任务界面。同理,对于按钮210而言,也可以采用诸如上述可变图标的形式呈现,具体实现方式可以参考前文的描述内容,在此不予赘述。
在本发明实施例中,无论是按钮207、按钮208还是按钮210,考虑到导航栏204占用显示界面的空间较小,因此,上述各个按钮在设置时可以以小图标的形式呈现。即上述各个按钮的图标小于系统区202呈现的应用程序的图标及文件夹,当然,也小于Dock区中各个应用程序快捷方式的图标。也就意味着,采用小图标的设计方式来设置上述各个按钮,能够有效节省显示界面中的空间。当然,导航栏中各个按钮也可以 以正常图标大小显示;各图标大小也可以不一致,本发明对此不予以限制。对于本发明实施例而言,导航栏204本身存在于显示界面中,在导航栏204中设置上述各个按钮,并不会占用显示界面除导航栏204以外的显示空间。尤其对于全面屏的手机而言,可以更充分的利用显示界面,从而在不占用额外显示空间的情况下,给用户提供更加便捷的操作方式。
以图2(a)或图2(b)所示的情况为例,用户可以通过单击、双击、长按等方式,作用于按钮207或按钮208,从而触发手机显示与按钮对应的界面。考虑到AI功能入口界面中可能存在多个AI功能入口,为了进一步方便用户使用,还可以定义在按钮207所在区域向左滑动、向右滑动、向上滑动等操作,分别触发不同的AI功能。
比如,以图2(a)为例,用户在按钮207所在区域向左滑动后,弹出如图3(b)所示的悬浮窗口,即实现用户一键调出扫一扫功能;用户在按钮207所在区域向右滑动后,弹出如图4(b)所示的悬浮窗口,即实现用户一键调出搜索功能;用户在按钮207所在区域向上滑动后,弹出如图5(b)所示的悬浮窗口,即实现用户一键调出语音输入功能。需要说明的是,上述操作方式作为一种可能的示例,并不作为对本发明实施例的限定。
对于设置按钮210的情况而言,用户可以通过单击、双击、长按,压力等方式,作用于按钮210,从而触发手机显示与按钮对应的界面,即AI功能入口界面和场景服务界面。为了对上述两个功能进行区分,还可以定义在按钮210所在区域向左滑动、向右滑动等操作,分别触发不同的功能。
比如,以图6(a)或图6(b)所示的情况为例,用户在按钮210所在区域向左滑动后,弹出悬浮窗口,即实现用户一键调出AI功能入口界面;用户在按钮210所在区域向右滑动后,将当前显示界面切换至场景服务任务对应的界面,即实现用户一键调出场景服务任务界面。由此可见,对于设置单个按钮触发两个功能的情况而言,用户可以选择调出一个功能或是两个功能。并且,在用户试图调出一个功能时,用户可以采用不同操作选择性调出不同功能。需要说明的是,上述操作方式作为一种可能的示例,并不作为对本发明实施例的限定。
如图7所示，为了保证导航栏204的简洁、美观，还可以不在导航栏204中设置除导航键206以外的按钮，而是在有用户作用于导航栏204后，例如向上滑动后，实现界面切换，以显示AI功能入口和场景服务任务。当然还可以预先设置以分页标记205为滑动操作的起始点执行的向上或是向下滑动操作被视为调用手机显示AI功能入口和/或场景服务任务的触发方式等。比如，以分页标记205为滑动操作的起始点，若用户执行向上滑动操作，弹出悬浮窗口，显示AI功能入口；若用户执行向下滑动操作，将当前显示界面切换至场景服务任务对应的界面，调出场景服务任务；若用户执行长按操作，将当前显示界面切换至场景服务任务对应的界面，并在该界面上方呈现悬浮窗口，以显示AI功能入口界面，或是在实现界面切换之后，在当前显示的界面中同时显示AI功能入口界面和场景服务任务界面。由此可见，上述例举的情况为一种可能的实现方式，并不作为对本发明实施例的限定。
下面结合具体的应用场景,对显示的AI功能入口和/或场景服务任务进行阐述。
以图2(a)或图2(b)所示的情况为例,即在导航栏204中设置除导航键206 以外的两个按钮,即按钮207和按钮208。对于其他按键不同设置的情况,其对应的功能是相同的,不再赘述。若手机当前显示的内容为新闻,那么在用户点击按钮207后,手机向用户呈现与按钮207对应的界面。比如,如图8(a)所示,在当前的新闻界面上弹出悬浮窗口209。其中,悬浮窗口209部分覆盖当前新闻界面,用于向用户呈现AI功能入口。在本发明实施例中,AI功能至少包括语音输入功能、扫一扫功能、搜索功能、识屏功能、应用程序功能的快捷方式和小程序中的一项。
如图8(a)所示的悬浮窗口209中,包括大卡片211所在区域、小卡片212所在区域和固定AI功能入口所在区域213。其中,一个或多个大卡片211可以向用户呈现识屏功能对当前显示界面识别后得到的识别结果,以及与该识别结果存在关联关系的内容;一个或多个小卡片212可以向用户呈现与该识别结果存在关联关系的应用程序的快捷方式,或是应用程序功能的快捷方式,或是小程序,在此不予限定。在固定AI功能入口所在区域213中,可以至少包括扫一扫功能的快捷按钮214、搜索功能的快捷按钮215和语音输入功能的快捷按钮216中的一项。用户可以通过点击按钮214以触发对二维码、条形码等图形的识别;通过点击按钮215以触发对文字、图片等内容的搜索;通过点击按钮216以开启语音口令输入的入口。
上述识屏功能,指的是可以通过识屏的方式识别出当前显示界面呈现的内容,具体的可以通过语义提取识别出当前显示界面中存在的关键字、关键词等,之后通过诸如标签匹配的方式,找到与关键字、关键词对应的应用程序、应用程序功能或是链接等内容,结合上述内容生成卡片向用户呈现。其中,标签匹配指的是关键字、关键词与已有应用程序的对应关系,或是与已有应用程序功能的对应关系,或是通过网页搜索等方式查找到的与关键字、关键词存在关联的内容对应的链接等,在此不予限定。上述各种对应关系,可以由用户预先进行设置,或是结合数据库、中控台存储的匹配关系来确定,在此不予限定。
从实现角度考虑,在通过识屏功能得到关键字、关键词等信息后,可以将得到的信息发送至各个应用程序,由各个应用程序去判断是否满足与关键字、关键词等信息的匹配关系,若满足该匹配关系,应用程序可以自动将自身推送至手机,由手机生成应用程序的快捷方式后呈现。同理,应用程序功能、小程序等也可以采用同样的实现方式,在此不予限定。获得关键词后,也可以自动网络搜索,将搜索结果以关键词/标题加链接的形式呈现。例如当前新闻是关于一项无人驾驶技术的,手机可以自动将搜索到的关于无人驾驶的其他新闻,相关技术文档,图片,报告等以卡片或链接的形式呈现在界面上供用户调用阅读。
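上述"识屏提取关键词、各应用判断是否匹配、匹配者生成卡片"的实现思路，可以用如下 Python 草图示意。其中的应用标签注册表为假设示例：

```python
# 示意性草图：识屏提取关键词后，由各"应用"依据自身标签判断是否与
# 关键词匹配，匹配者被推送生成快捷方式卡片。标签注册表为假设示例。

APP_TAGS = {
    "播放器": {"新闻", "视频"},
    "记事本": {"新闻", "备忘"},
    "约车": {"出行", "机场"},
}

def match_apps(keywords: set) -> list:
    """返回标签与识屏关键词有交集的应用，作为小卡片候选（排序以保证稳定输出）。"""
    return sorted(app for app, tags in APP_TAGS.items() if tags & keywords)
```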
在本发明实施例中,大卡片211可以向用户呈现当前显示界面呈现的新闻的链接,用户可以通过点击大卡片211的方式保存该链接至预设的位置,比如,预设的位置可以为当前悬浮界面的部分或全部区域,也可以为手机的负一屏(HIBOARD)或其他位置。即以卡片的形式保存至手机的负一屏,之后用户可以在负一屏对应的界面打开该链接对应的内容继续浏览。其中,负一屏可以被视为方便用户操作的多功能集合的界面,使用户不用打开应用程序就能够获得相应的服务和内容。为了更直观地向用户呈现卡片中记载的内容,在大卡片211中选择性地向用户呈现大卡片211中记载的内容的分类、摘要等内容,在此不予限定。当然用户也可以直接通过单击、双击、长按或是其 他操作打开大卡片211对应的应用程序、应用程序功能、小程序或是链接,以查阅相关内容。
另外,小卡片212可以向用户呈现与当前识屏结果相关的应用程序功能的快捷方式或是小程序。以图8(a)为例,用户可以通过点击播放器小卡片,以打开播放器快捷方式,从而从中搜索与当前新闻相关的视频;或是点击聊天快捷方式,以实现当前新闻内容的分享、探讨等;或是点击记事本快捷方式,以记录当前新闻中的重要内容等。对于手机而言,结合当前的识屏结果,认为播放器、聊天、记事本等应用程序的快捷方式与该新闻的相关性较大,因此生成这类小卡片推送给用户。需要说明的是,相关性的大小可以由用户预先设置,比如,设置某个或是某类关键字、关键词对应的应用程序的快捷方式等,在此不予限定。小卡片中快捷方式或小程序被打开时,其内容可以显示在当前界面而无需跳转界面,也可以跳转到其被打开应用程序对应的界面。用户可以对打开方式进行设定或修改。
需要说明的是,考虑到悬浮窗口209的大小有限,即向用户呈现的内容有限,在本发明实施例中,用户可以通过滑动操作,在悬浮窗口209中左右或是上下滑动,以选择性地呈现悬浮窗口209中的部分或是全部内容。比如,用户按照如图8(a)所示的滑动方向进行滑动,得到如图8(b)所示的内容。
考虑到扫一扫功能、搜索功能以及语音输入功能属于较为常用的AI功能,且通常不会随着场景的变化而发生改变,因此,在本发明实施例中,用于触发上述AI功能的按钮可以固定显示在悬浮窗口中,即按钮214、按钮215以及按钮216的位置一旦确定,可以不随着场景的改变而发生变化。即用户在悬浮窗口209进行滑动时,上述三个按钮的位置不会发生改变。而对于大卡片211、小卡片212而言,上述滑动操作可以使大卡片211和小卡片212同时被滑动。
当然,也可以区分大卡片211和小卡片212所在区域,使悬浮窗口209呈现的内容被划分成多个显示窗口,用户可以单独对每个显示窗口进行操作。比如,用户在大卡片211所在区域的滑动操作,用于控制大卡片211的左右滑动;而用户在小卡片212所在区域的滑动操作,用于控制小卡片212的左右滑动。当然,上述例举的情况为一种可能的实现方式,并不作为对本发明实施例的限定。
对于用户而言,扫一扫功能、搜索功能以及语音输入功能可以在作用于按钮207后调出,而无需用户执行多次操作找到上述各按钮的位置。并且,对于那些对手机操作不熟练或是学习能力较差的用户而言,能够很方便地调出上述各AI功能的触发按钮,以使用户实现上述功能。
除了上述例举的用户查阅新闻的场景,在用户处于其他场景时,悬浮窗口209可以向用户呈现其他基于场景所确定出的向用户推送的内容。
在本发明实施例中,AI功能入口界面的内容可能与应用程序有关,或是与当前显示界面呈现的内容有关,或是与应用程序和当前显示界面呈现的内容有关等,在此不予限定。
比如，在用户使用诸如微信、QQ、短信等社交的应用程序时，若用户调出AI功能入口，那么此时可以基于当前信息交互界面，即聊天界面所呈现的内容，为用户推送适合的内容。此时的对话窗口中，存在诸如场所的文字、图片信息、语音等内容，手机通过识屏功能，可以识别出文字中记载的场所，或是图片信息中呈现的景物对应的场所，或是通过语音识别的方式从语音中提取场所的名称等相关信息。之后基于该场所，手机实现对该场所的搜索，从而确定与该场所对应的内容，比如，该场所的位置、到达该场所的交通方式、该场所的消费水平等内容。之后基于确定的内容，选择与所确定内容匹配的应用程序、应用程序功能等，从而为用户推送具有团购功能的应用程序的快捷方式，具备约车功能的应用程序的快捷方式等。需要说明的是，上述分析处理及推送过程，还可以仅考虑当前显示界面呈现的内容，即无关于应用程序的类型，手机直接基于当前显示界面呈现的内容完成识屏，以实现内容的搜索及推送。同样的，手机还可以结合应用程序及当前界面呈现的内容完成上述操作，在此不予赘述。
以上述社交的应用程序为例,响应于接收到用户对第一应用界面的导航栏上的第一按钮的预设操作,还可以在第一应用界面的输入框中显示第一推荐信息。其中,第一推荐信息为AI根据第一应用界面上的一个或多个显示对象确定的。该显示对象为文字、语音或图像信息中的至少一项。
也就意味着,对于用户而言,在上述聊天界面中,手机可以基于聊天界面中呈现的内容,比如,对话内容的上下文等,实现语义分析等处理操作,从而为用户推荐用户可能使用到的第一推荐信息。比如,第一推荐信息可以为用户期望输入到输入框中的内容,以回复处于聊天界面对端的用户。这样,用户就可以直接从第一推荐信息中选择期望输入到输入框中的内容,而省去为了回复对端用户而输入文字、语音等信息的操作。尤其是对于诸如手机等输入键盘呈现比例较小,不方便用户输入的设备而言,上述呈现供用户选择的第一推荐信息的方式,可以有效为用户推荐聊天内容、回复内容等信息,从而解决诸如手机等设备输入不方便的问题。
参考上述在输入框中显示第一推荐信息的情况而言,还可以在第一应用界面上悬浮显示第一推荐信息,比如,如图8(a)、图8(b)所示,还可以修改第一应用界面的界面并在修改后的第一应用界面上显示第一推荐信息。比如,将第一应用界面的界面缩放,并将缩放后的内容显示在当前显示界面的上方,之后将第一推荐信息显示在当前显示界面的下方。再比如,将第一应用界面的部分界面显示在当前显示界面的上方,之后将第一推荐信息显示在当前显示界面的下方。上述为两种例举的呈现方式,并不作为对本发明实施例的限定。当然,上述修改第一应用界面的界面,包括但不限于选定第一应用界面中的部分内容,或是对第一应用界面中的部分内容进行调整等。
再比如，在用户正在观看视频的过程中，若用户调出AI功能入口，为了不影响用户观看视频的过程，可以使第一推荐信息与正在播放的视频分屏显示。即当前正在播放的视频窗口被缩放，且在呈现时，占据当前显示界面的大部分区域，而对于剩余的当前显示界面的小部分区域，可以用于显示第一推荐信息。手机可以通过识屏功能，确定出当前视频的类型、名称等信息，之后基于这些信息，为用户推送该视频的相关内容，即第一推荐信息。比如，手机识别到该视频为一个已上映电影的宣传片，那么手机可以为用户推送具有购票功能的应用程序的快捷方式、已上映该电影的影院信息，比如，影院的位置、票价、电影的播放场次等。此时，用户可以直接通过点击该快捷方式，以实现影院的选择，完成购票操作。对于用户而言，上述推送方式，还可以便于用户了解当前观看的视频对应电影的相关信息，比如，影评信息等。
再比如,在用户拍照过程中,若用户调出AI功能入口,那么此时可以基于预览画面中的内容,为用户推送相关内容,即第一推荐信息。为了保证成像效果,在本发明实施例中,可以使第一推荐信息悬浮显示在当前应用界面之上。其中,预览画面中的内容包括但不限于文字、景物、食物及任务中的至少一项。比如,画面中的内容为长城,那么手机可以向用户推送与长城有关的历史信息,比如,长城的由来、建立时间等。当然,考虑到长城属于名胜古迹之一,还可以向用户推送诸如十三陵等其他著名景点的相关信息,供用户查阅。
以上述拍照过程为例,手机识别到当前一直处于预览画面的阶段,那么可以认为用户当前有拍摄的趋势,但还未完成拍摄。此时,为了使用户拍摄出效果相对较好的图像,可以为用户提供摄影技巧提示。其中,摄影技巧提示的内容包括但不限于被拍摄人物的位置、姿势,取景时的景深,按压快门的时间,选取的滤镜模式等诸多内容中的至少一项。其中,滤镜模式包括但不限于人像模式、微距模式,以及运动模式等模式中的一项。
上述显示出来的信息的获取,可以是从用户保存/收藏的内容中提取,也可以是从网络上获取。例如,可以优先从用户保存/收藏的内容中获取与当前取景界面中对象相关的信息并显示。在用户保存/收藏内容中不存在相关信息时,从网络上搜索获取相关信息。也可以既从用户保存/收藏/浏览过的内容中获取也从网络中获取,之后全部呈现在显示界面中,或者根据用户选择呈现用户指定的信息。
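上述"优先从用户保存/收藏的内容中检索、本地未命中再回退网络搜索"的获取顺序，可以用如下 Python 草图示意。其中的数据内容与占位的网络搜索函数均为假设示例：

```python
# 示意性草图：优先从用户保存/收藏的内容中检索与取景对象相关的信息，
# 本地不存在相关信息时再回退到网络搜索。数据源与占位函数均为假设示例。

SAVED = {"长城": "收藏的游记：长城的由来"}

def web_search(obj: str) -> str:
    # 此处以占位函数代替真实的网络搜索接口
    return f"网络搜索结果：关于{obj}的资料"

def lookup(obj: str) -> str:
    """先查本地保存/收藏内容，未命中时回退到网络搜索。"""
    return SAVED.get(obj) or web_search(obj)
```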
其中,用户保存/收藏/浏览的内容可以是保存在当前终端内部的,即保存在当前用户操作的手机本地的内容,或者是同一账号下保存在其他终端中的内容,还可以是保存在云端的内容等,在此不予限定。用户浏览的内容可以是保存在各个终端中的,也可以是保存在各服务器中的内容。可以是用户上网历史记录之类。其中,服务器属于本文所提及的电子设备的一种。
再比如,在用户通过手机玩游戏的过程中,若用户调出AI功能入口,那么此时可以基于当前所呈现的游戏界面,为用户推送相关内容。比如,当前显示的为一个角色的介绍界面。此时,手机可以通过识屏的方式,识别到该角色的信息,之后通过搜索等方式,从网络侧查找到该角色的操作方式、其他玩家为该角色配备的状态,以及该角色适应的对战阵容等内容后,向用户推送,具体可以呈现为大卡片或是小卡片的形式等。为了使用户快速了解该角色的操作方式,还可以向用户推送大神玩家的操作视频,供用户查看等。
再比如，在用户播放音乐的过程中，若用户调出AI功能入口，可以向用户推送该音乐的创作背景、与该音乐风格类似的音乐列表、该音乐演唱者的其他作品等。再比如，在用户查看图片时，若用户调出AI功能入口，可以向用户推送几款处理图片效果较好的应用程序、识别出图片的拍摄地点以向用户提供拍摄地点的相关信息等。再比如，在用户使用导航的过程中，若用户调出AI功能入口，可以向用户推送手机中已安装的其他导航软件的快捷方式等。
需要说明的是,若第一推荐信息为网络地址链接,即在用户调出AI功能入口界面后向用户呈现的内容包括网络地址链接。那么在用户对该网络地址链接执行诸如点击、滑动等预设操作后,手机响应于用户对该网络地址链接的预设操作,在当前显示界面 上,即在第一应用界面上,显示该网络地址链接指向的内容。
由此可见,基于不同场景,AI功能入口可以产生多样的变化形式,从而为用户提供更加优质的服务。
以图2(a)或图2(b)所示的情况为例,即在导航栏204中设置除导航键206以外的两个按钮,即按钮207和按钮208。若用户作用于按钮208,手机向用户呈现与按钮208对应的界面,比如,如图9所示。
在如图9所示的显示界面中,包括状态栏201、导航栏204和位置信息217。其中,位置信息217为手机基于当前场景,通过诸如定位功能等确定出的用户当前所处位置,比如,位置信息可以为“靠近办公区”。此时,手机向用户呈现的内容包括但不限于与办公相关的内容,以及用户通常情况下在12:50所处的时间范围内,在靠近办公区时所访问的内容。比如,签到卡片、新闻卡片以及会议日程卡片。对于用户而言,在午休时间,即12:50所处的时间范围内,用户通常会访问新闻应用程序、查看会议日程,而在办公区附近时,用户通常会签到。因此,在本发明实施例中,会基于用户所在位置和时间,以及日常用户的行为习惯,向用户推送上述内容或应用/小程序。其中,小程序是一种特殊的、不需要下载安装即可使用的应用程序,用户扫一扫或者搜一下即可打开应用程序。用户不用关心是否安装太多应用程序的问题。应用程序将无处不在,随时可用,但又无需安装卸载。
以签到卡片为例,一般情况下,用户需要通过点击等方式打开具备签到功能的应用程序后才能完成签到操作。而对于本发明实施例而言,在场景服务任务界面中,存在签到卡片,用户可以在签到卡片上通过点击等方式实现签到。此时,用户不用再打开具备签到功能的应用程序。上述签到卡片的实现方式可以被视为一个地址链接,即当前用户的点击操作虽然是在签到卡片上执行,却能直接链接到具备签到功能的应用程序上,以实现用户在签到卡片上执行的点击操作,等同于用户打开具备签到功能的应用程序后执行的签到操作。
此外,手机可以选择性地向用户呈现建议性的小卡片,比如,当前用户存在未出行的机票,或是用户当前的会议日程可能存在与当前所在位置的城市不一致的情况下等,手机可以向用户推荐旅行小卡片。该旅行小卡片的作用可以在于为用户提供购买机票、选座等服务。同样的,考虑用户在办公区附近,而用户处于办公区时通常会运行电子邮件这一应用程序,那么手机可以向用户推送电子邮件小卡片,该电子邮件小卡片的作用可以在于供用户实时查收、回复电子邮件等。需要说明的是,上述向用户呈现的内容为一种可能的举例,并不作为对本发明实施例的限定。
对于场景服务任务而言，其可以随着场景的变化而发生变化。因此，在本发明实施例中，若满足预设条件，可以对场景服务任务进行更新。
比如,若预设条件为当前时间处于预设时间范围,将场景服务任务替换为与预设时间范围对应事件匹配的场景服务任务。结合用户日常访问应用程序的习惯,可以按照时间点或是时间段对场景服务任务进行更新,比如,用户通常在上午8点到10点之间阅读新闻,那么在8点到10点这段时间内,用于查阅新闻的应用程序的快捷方式就可以被推送到场景服务任务。
再比如，若预设条件为当前所在位置处于预设位置范围，将场景服务任务替换为与预设位置范围对应事件匹配的场景服务任务。结合用户的日程安排、提醒事项等内容中涉及的场所，基于用户当前所处位置，向用户推送场景服务任务。比如，存在未出行的机票，且日程安排中显示机票为今日出行，那么场景服务任务可以基于用户当前所处位置及机场所在位置，为用户提供导航路线、路程所需时间等内容，供用户参考。
再比如,若预设条件为当前运动状态属于预设运动状态,将场景服务任务替换为与预设运动状态匹配的场景服务任务。比如,手机通过传感器识别到用户当前的行驶速度处于驾驶速度范围,那么手机可以认为用户当前处于驾驶状态。若预设运动状态包括驾驶状态,那么在手机确认用户处于驾驶状态后,可以向用户推送驾驶相关信息,比如,当前驾驶速度、剩余油量等。可选性地向用户推送当前各路线的路况信息等,在此不予限定。
需要说明的是,上述例举的各种可能的情况中,可以作为一条单独的预设条件进行考虑,也可以结合至少两条预设条件,完成场景服务任务的推送。也就意味着,在本发明实施例中,场景服务任务的种类可以为一个或是多个,在此不予限定。
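上述各预设条件（时间范围、位置范围、运动状态）的匹配过程，可以用如下 Python 草图示意。时间范围、位置与速度阈值均为假设示例，实际可单独使用或组合使用：

```python
# 示意性草图：依次检查各预设条件（时间范围、位置范围、运动状态），
# 命中的条件对应的场景服务任务被选入待显示集合。条件与任务均为假设示例。

def select_tasks(hour: int, place: str, speed_kmh: float) -> list:
    tasks = []
    if 8 <= hour < 10:                 # 当前时间处于预设时间范围
        tasks.append("新闻")
    if place == "机场":                # 当前所在位置处于预设位置范围
        tasks.append("航班导航")
    if 30 <= speed_kmh:                # 行驶速度落入预设的驾驶速度范围
        tasks.append("驾驶信息")
    return tasks
```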
也就意味着,对于场景服务任务界面而言,在第一时间,在场景服务任务界面的第一预设位置上显示第三应用程序的快捷方式,响应于接收到用户对第三应用程序的快捷方式的预设操作,在场景服务任务界面上显示第三应用程序对应的界面;在第二时间,在场景服务任务界面的第一预设位置上显示第四应用程序的快捷方式,响应于接收到用户对第四应用程序的快捷方式的预设操作,在场景服务任务界面上显示第四应用程序对应的界面。其中,第三应用程序和第四应用程序是电子设备根据用户使用习惯确定的;第一时间不同于第二时间,第三应用程序不同于第四应用程序。在本发明实施例中,上述场景服务任务的更新,可以更好地为用户提供适应于当前场景的内容。参照上述场景服务任务的更新过程,第二按钮,即按钮208的呈现形式也可以发生改变。比如,在第一时间,第二按钮上显示与第三应用程序对应的内容;在第二时间,第二按钮上显示与第四应用程序对应的内容。
为了节约手机能耗,在本发明实施例中,场景服务任务可以不实时更新,即可以周期性或是定时更新场景服务任务,比如,用户可以提前设置更新场景服务任务的时间点;或是在用户当前的操作满足预设的触发条件后对场景服务任务进行更新。比如,用户在一段时间内访问某一应用程序的次数超过预设次数,那么认为用户近期可能需要多次访问该应用程序,此时,可以将该应用程序的快捷方式设置在场景服务任务中,供用户方便使用等。此外,手机可以在用户每次点亮屏幕和/或解锁屏幕后,更新场景服务任务等,在此不予限定。
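上述更新时机的判断——周期到期、某应用近期访问次数超过预设次数、或用户点亮/解锁屏幕——可以用如下 Python 草图示意。更新周期与访问次数阈值均为假设示例：

```python
# 示意性草图：判断是否需要更新场景服务任务。触发条件包括：用户点亮/
# 解锁屏幕、距上次更新已满一个周期、某应用近期访问次数超过预设次数。
# 周期与阈值数值均为假设示例。

def should_update(elapsed_min: int, access_counts: dict,
                  screen_unlocked: bool,
                  period_min: int = 60, threshold: int = 5) -> bool:
    if screen_unlocked:                # 点亮/解锁屏幕后更新
        return True
    if elapsed_min >= period_min:      # 周期性更新
        return True
    # 一段时间内访问次数超过预设次数的应用，触发更新
    return any(n > threshold for n in access_counts.values())
```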
以图6(a)、图6(b)或图7所示的情况为例,在显示界面切换显示为包括AI功能入口界面和场景服务任务界面在内的显示界面时,可以呈现为如图10所示的内容。考虑到在同一显示界面中呈现AI功能入口界面和场景服务任务界面时,很可能由于需要呈现的内容过多而很难全部显示,因此,在本发明实施例中,在显示界面中还可以设置滚动条218。用户可以通过滑动滚动条218,实现场景服务任务的浏览。当然,在当前显示界面中,可以设置用于控制整个显示界面的滚动条;也可以基于两个不同的功能,分别设置用于控制场景服务任务的滚动条,以及用于控制AI功能入口的滚动条; 或者,在不设置滚动条的情况下,默认用户的滑动操作为控制界面翻页或是上下、左右移动的操作方式。
对于用户而言，无论是按钮207、按钮208或是按钮210，均可以由用户对上述各按钮的功能选择性地开启或是关闭。比如，在如图11(a)所示的设置界面中，包括关于导航栏的设置选项。用户可以通过点击的方式打开导航栏的设置界面，即如图11(b)所示。在导航栏的设置界面中，用户可以选择性地开启AI功能入口和场景服务任务中的一个或是多个，当然，用户也可以选择不开启上述两个功能。
以用户开启AI功能入口为例,在导航栏204中,呈现按钮207,以使用户作用于按钮207后,向用户呈现悬浮窗口,以使用户触发各种AI功能。同理,场景服务任务的功能,也可以在导航栏设置界面中开启,操作方式与开启AI功能入口类似,在此不予赘述。
需要说明的是,用于供用户选择性开启AI功能入口界面及场景服务任务界面的按钮的方式,不限于上述例举的操作方式,用户还可以通过其他界面完成设置操作。当然,手机在出厂时也可以默认按钮207、按钮208与导航键206同时呈现,或是按钮210与导航键206同时呈现,在此不予限定。
同样的,对于用户而言,在确定开启AI功能入口的情况下,对于基础AI功能,用户也可以选择是否开启。其中,基础AI功能,包括但不仅限于上述搜索功能、扫一扫功能以及语音输入功能中的一项或是多项。以扫一扫功能为例,若用户关闭扫一扫功能,那么以图10为例,在固定AI功能入口所在区域213中不存在按钮214。
在上述终端中可以设置有控制装置,控制装置为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本发明能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本发明的范围。
本发明实施例所涉及的各个控制装置都用于实现上述方法实施例中的方法。本发明实施例可以根据上述方法示例对控制装置进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本发明实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
如图12所示,为上述实施例中所涉及的控制装置的一种可能的结构示意图。控制装置30包括:显示模块31、接收模块32和处理模块33。其中,显示模块31用于支持控制装置30实现第一界面、AI功能入口界面、服务场景任务界面的显示,以及本发明实施例中涉及到的诸如第一按钮、第二按钮等非导航按钮、导航键等功能按键的显示等;接收模块32用于支持控制装置30接收第一输入、第二输入及第三输入等,还可以为用户作用于显示界面上呈现的任何内容的输入操作,或是用户作用于硬按键上的输入操作等;处理模块33用于支持控制装置30对显示界面中呈现的内容进行诸如语义分析、提取关键字等操作,和/或用于本文所描述的技术的其它过程。在本发明实施例中,控 制装置30还包括:通信模块34,用于支持控制装置30与终端中各个模块之间进行数据交互,和/或支持终端与诸如服务器等其他设备之间的通信;存储模块35用于支持控制装置30存储终端的程序代码和数据。
其中,处理模块33可以实现为处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),通用处理器,数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。其可以实现或执行结合本发明公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。通信模块34可以实现为收发器、收发电路或通信接口等。存储模块35可以实现为存储器。
若显示模块31实现为显示器、处理模块33实现为处理器、接收模块32和通信模块34实现为收发器、存储模块35实现为存储器,则如图13所示,终端40包括:处理器41、收发器42、存储器43、显示器44,以及总线45。其中,处理器41、收发器42、存储器43、显示器44通过总线45相互连接;总线45可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等。所述总线可以分为地址总线、数据总线、控制总线等。为便于表示,图13中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线。
结合本发明公开内容所描述的方法或者算法的步骤可以硬件的方式来实现，也可以是由处理器执行软件指令的方式来实现。软件指令可以由相应的软件模块组成，软件模块可以被存放于随机存取存储器(Random Access Memory，RAM)、闪存、只读存储器(Read Only Memory，ROM)、可擦除可编程只读存储器(Erasable Programmable ROM，EPROM)、电可擦可编程只读存储器(Electrically EPROM，EEPROM)、寄存器、硬盘、移动硬盘、只读光盘(Compact Disc Read-Only Memory，CD-ROM)或者本领域熟知的任何其它形式的存储介质中。一种示例性的存储介质耦合至处理器，从而使处理器能够从该存储介质读取信息，且可向该存储介质写入信息。当然，存储介质也可以是处理器的组成部分。处理器和存储介质可以部署在同一设备中，或者，处理器和存储介质也可以作为分立组件部署于不同的设备中。
本发明实施例提供一种芯片,模组或装置,用于实现上述方法实施例中的方法,具体的指示与上述控制装置连接的显示器、处理器、输入设备执行本发明实施例涉及的控制方法所实现的各个功能。
本发明实施例提供一种可读存储介质,该可读存储介质中存储有指令,当指令在终端上运行时,使得终端执行上述方法实施例中的任意一项方法。
本发明实施例提供一种计算机程序产品,计算机程序产品包括软件代码,软件代码用于执行上述方法实施例中的任意一项方法。
以上所述的具体实施方式,对本发明实施例的目的、技术方案和有益效果进行了进一步详细说明,所应理解的是,以上所述仅为本发明的具体实施方式而已,并不用于限定本发明的保护范围,凡在本发明实施例的技术方案的基础之上,所做的任何修 改、等同替换、改进等,均应包括在本发明实施例的保护范围之内。

Claims (48)

  1. 一种控制方法,由电子设备执行,其特征在于,所述方法包括:
    显示第一界面,所述第一界面中包含导航栏,所述导航栏设置有导航键和至少一个非导航按钮,其中,所述导航键用于在被触发时所述电子设备执行返回上一界面、跳转至主界面和调出截止当前时刻为止的预设时间内访问的应用程序的界面中的至少一项,所述至少一个非导航按钮用于在被触发时所述电子设备执行显示人工智能AI功能入口界面和场景服务任务界面中的至少一项;
    接收用户作用于一个所述非导航按钮的第一输入;
    响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面中的至少一项。
  2. 根据权利要求1所述的方法,其特征在于,所述至少一个非导航按钮为一个按钮,所述响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面中的至少一项,包括:
    响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面。
  3. 根据权利要求1所述的方法,其特征在于,所述至少一个非导航按钮为两个按钮,所述接收用户作用于所述非导航按钮的第一输入;响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面中的至少一项,包括:
    接收用户作用于第一按钮的第二输入,响应于所述第二输入,显示与所述第一按钮对应的所述AI功能入口界面;
    接收用户作用于第二按钮的第三输入,响应于所述第三输入,显示与所述第二按钮对应的所述场景服务任务界面。
  4. 根据权利要求3所述的方法,其特征在于,所述响应于所述第二输入,显示与所述第一按钮对应的所述AI功能入口界面,包括:
    响应于所述第二输入,所述AI功能入口界面悬浮显示在所述第一界面上。
  5. 根据权利要求3所述的方法,其特征在于,所述响应于所述第三输入,显示与所述第二按钮对应的所述场景服务任务界面,包括:
    响应于所述第三输入,将所述第一界面切换显示为所述场景服务任务界面。
  6. 根据权利要求3至5中任意一项所述的方法,其特征在于,所述第一界面为第一应用界面,所述响应于所述第二输入,显示与所述第一按钮对应的所述AI功能入口界面,包括:
    响应于接收到用户对所述第一应用界面的导航栏上的第一按钮的预设操作,在所述第一应用界面上显示第一推荐信息,所述第一推荐信息为AI根据所述第一应用界面上显示的一个或多个显示对象确定的,其中,所述显示对象为文字、语音或图像信息中的至少一项。
  7. 根据权利要求6所述的方法,其特征在于,所述在所述第一应用界面上显示第一推荐信息具体为以下情况中的至少一种:
    在所述第一应用界面的输入框中显示第一推荐信息;在所述第一应用界面上悬浮显示第一推荐信息;修改第一应用界面的界面并在修改后的第一应用界面上显示第一 推荐信息。
  8. 根据权利要求6或7所述的方法,其特征在于,所述第一推荐信息为网络地址链接,文字,图片或表情中的至少一种。
  9. 根据权利要求8所述的方法,其特征在于,所述第一推荐信息为网络地址链接,在所述第一应用界面上显示第一推荐信息之后,所述方法进一步包括:
    响应于用户对所述网络地址链接的预设操作,在所述第一应用界面上显示所述网络地址链接指向的内容。
  10. 根据权利要求9所述的方法,其特征在于,所述第一应用界面为取景界面,所述第一推荐信息为显示在所述第一应用界面上的一个或多个显示对象对应的信息,所述显示对象为图像信息。
  11. 根据权利要求1至10中任意一项所述的方法,其特征在于,所述AI功能入口界面还包括语音、图像和文字搜索,以及保存功能按钮中的至少一项。
  12. 根据权利要求1至11中任意一项所述的方法,其特征在于,所述响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面,包括:
    响应于所述第一输入,对所述第一界面上的内容进行语义分析,提取一个或多个关键字,显示包含有特定信息的AI功能入口界面,所述特定信息为与提取的关键字对应的信息。
  13. 根据权利要求1至12中任意一项所述的方法,其特征在于,所述场景服务任务界面,包括:
    在第一时间,在所述场景服务任务界面的第一预设位置上显示第三应用程序的快捷方式,响应于接收到用户对所述第三应用程序的快捷方式的预设操作,在所述场景服务任务界面上显示所述第三应用程序对应的界面;
    在第二时间,在所述场景服务任务界面的第一预设位置上显示第四应用程序的快捷方式,响应于接收到用户对所述第四应用程序的快捷方式的预设操作,在所述场景服务任务界面上显示所述第四应用程序对应的界面;
    其中,所述第三应用程序和第四应用程序是所述电子设备根据用户使用习惯确定的;所述第一时间不同于所述第二时间,所述第三应用程序不同于所述第四应用程序。
  14. 根据权利要求13所述的方法,其特征在于,所述方法进一步包括:
    在所述第一时间,所述第二按钮上显示与所述第三应用程序对应的内容;
    在所述第二时间,所述第二按钮上显示与所述第四应用程序对应的内容。
  15. 根据权利要求1至5中任意一项所述的方法,其特征在于,所述第一界面为主界面,所述第一界面还包括停靠Dock区,所述Dock区用于放置应用程序的快捷方式。
  16. 一种控制装置,由电子设备执行,其特征在于,所述装置包括:
    显示模块,用于显示第一界面,所述第一界面中包含导航栏,所述导航栏设置有导航键和至少一个非导航按钮,其中,所述导航键用于在被触发时所述电子设备执行返回上一界面、跳转至主界面和调出截止当前时刻为止的预设时间内访问的应用程序的界面中的至少一项,所述至少一个非导航按钮用于在被触发时所述电子设备执行显示人工智能AI功能入口界面和场景服务任务界面中的至少一项;
    接收模块,用于接收用户作用于所述显示模块显示的一个所述非导航按钮的第一输入;
    所述显示模块,还用于响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面中的至少一项。
  17. 根据权利要求16所述的装置,其特征在于,所述至少一个非导航按钮为一个按钮,所述显示模块,还用于响应于所述第一输入,显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面。
  18. 根据权利要求16所述的装置,其特征在于,所述至少一个非导航按钮为两个按钮,所述接收模块,还用于接收用户作用于第一按钮的第二输入;
    所述显示模块,还用于响应于所述第二输入,显示与所述第一按钮对应的所述AI功能入口界面;
    所述接收模块,还用于接收用户作用于第二按钮的第三输入;
    所述显示模块,还用于响应于所述第三输入,显示与所述第二按钮对应的所述场景服务任务界面。
  19. 根据权利要求18所述的装置,其特征在于,所述显示模块,还用于响应于所述第二输入,所述AI功能入口界面悬浮显示在所述第一界面上。
  20. 根据权利要求18所述的装置,其特征在于,所述显示模块,还用于响应于所述第三输入,将所述第一界面切换显示为所述场景服务任务界面。
  21. 根据权利要求18至20中任意一项所述的装置,其特征在于,所述第一界面为第一应用界面,所述显示模块,还用于响应于接收到用户对所述第一应用界面的导航栏上的第一按钮的预设操作,在所述第一应用界面上显示第一推荐信息,所述第一推荐信息为AI根据所述第一应用界面上显示的一个或多个显示对象确定的,其中,所述显示对象为文字、语音或图像信息中的至少一项。
  22. 根据权利要求21所述的装置,其特征在于,所述在所述第一应用界面上显示第一推荐信息具体为以下情况中的至少一种:
    在所述第一应用界面的输入框中显示第一推荐信息;在所述第一应用界面上悬浮显示第一推荐信息;修改第一应用界面的界面并在修改后的第一应用界面上显示第一推荐信息。
  23. 根据权利要求21或22所述的装置,其特征在于,所述第一推荐信息为网络地址链接,文字,图片或表情中的至少一种。
  24. 根据权利要求23所述的装置,其特征在于,所述第一推荐信息为网络地址链接,所述显示模块,还用于响应于用户对所述网络地址链接的预设操作,在所述第一应用界面上显示所述网络地址链接指向的内容。
  25. 根据权利要求24所述的装置,其特征在于,所述第一应用界面为取景界面,所述第一推荐信息为显示在所述第一应用界面上的一个或多个显示对象对应的信息,所述显示对象为图像信息。
  26. 根据权利要求16至25中任意一项所述的装置,其特征在于,所述AI功能入口界面还包括语音、图像和文字搜索,以及保存功能按钮中的至少一项。
  27. 根据权利要求16至26中任意一项所述的装置,其特征在于,所述装置进一 步包括:
    处理模块,用于响应于所述第一输入,对所述第一界面上的内容进行语义分析,提取一个或多个关键字;
    所述显示模块,还用于显示包含有特定信息的AI功能入口界面,所述特定信息为与提取的关键字对应的信息。
  28. 根据权利要求16至27中任意一项所述的装置,其特征在于,所述场景服务任务界面,包括:
    在第一时间,在所述场景服务任务界面的第一预设位置上通过所述显示模块显示第三应用程序的快捷方式,响应于接收到用户对所述第三应用程序的快捷方式的预设操作,在所述场景服务任务界面上通过所述显示模块显示所述第三应用程序对应的界面;
    在第二时间,在所述场景服务任务界面的第一预设位置上通过所述显示模块显示第四应用程序的快捷方式,响应于接收到用户对所述第四应用程序的快捷方式的预设操作,在所述场景服务任务界面上通过所述显示模块显示所述第四应用程序对应的界面;
    其中,所述第三应用程序和第四应用程序是所述电子设备根据用户使用习惯确定的;所述第一时间不同于所述第二时间,所述第三应用程序不同于所述第四应用程序。
  29. 根据权利要求28所述的装置,其特征在于,所述显示模块,还用于在所述第一时间,所述第二按钮上显示与所述第三应用程序对应的内容;在所述第二时间,所述第二按钮上显示与所述第四应用程序对应的内容。
  30. 根据权利要求16至20中任意一项所述的装置,其特征在于,所述第一界面为主界面,所述第一界面还包括停靠Dock区,所述Dock区用于放置应用程序的快捷方式。
  31. 一种装置，由电子设备执行，其特征在于，所述装置包括：
    指示与所述装置连接的显示器显示第一界面,所述第一界面中包含导航栏,所述导航栏设置有导航键和至少一个非导航按钮,其中,所述导航键用于在被触发时所述电子设备执行返回上一界面、跳转至主界面和调出截止当前时刻为止的预设时间内访问的应用程序的界面中的至少一项,所述至少一个非导航按钮用于在被触发时所述电子设备执行显示人工智能AI功能入口界面和场景服务任务界面中的至少一项;
    指示与所述装置连接的输入设备接收用户作用于一个所述非导航按钮的第一输入的信号;
    响应于所述第一输入的信号,指示所述显示器显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面中的至少一项。
  32. 根据权利要求31所述的装置,其特征在于,所述至少一个非导航按钮为一个按钮,响应于所述第一输入的信号,指示所述显示器显示与所述非导航按钮对应的AI功能入口界面和场景服务任务界面。
  33. 根据权利要求31所述的装置,其特征在于,所述至少一个非导航按钮为两个按钮,指示所述输入设备接收用户作用于第一按钮的第二输入的信号,响应于所述第二输入的信号,指示所述显示器显示与所述第一按钮对应的所述AI功能入口界面;
    指示所述输入设备接收用户作用于第二按钮的第三输入的信号,响应于所述第三输入的信号,指示所述显示器显示与所述第二按钮对应的所述场景服务任务界面。
  34. 根据权利要求33所述的装置,其特征在于,响应于所述第二输入的信号,指示所述显示器将所述AI功能入口界面悬浮显示在所述第一界面上。
  35. 根据权利要求33所述的装置,其特征在于,响应于所述第三输入的信号,指示所述显示器将所述第一界面切换显示为所述场景服务任务界面。
  36. 根据权利要求33至35中任意一项所述的装置,其特征在于,所述第一界面为第一应用界面,响应于接收到用户对所述第一应用界面的导航栏上的第一按钮的预设操作的信号,指示所述显示器在所述第一应用界面上显示第一推荐信息,所述第一推荐信息为AI根据所述第一应用界面上显示的一个或多个显示对象确定的,其中,所述显示对象为文字、语音或图像信息中的至少一项。
  37. 根据权利要求36所述的装置,其特征在于,指示所述显示器在所述第一应用界面的输入框中显示第一推荐信息;和/或,指示所述显示器在所述第一应用界面上悬浮显示第一推荐信息;和/或,指示与所述装置连接的处理器修改第一应用界面的界面,并指示所述显示器在修改后的第一应用界面上显示第一推荐信息。
  38. 根据权利要求36或37所述的装置,其特征在于,所述第一推荐信息为网络地址链接,文字,图片或表情中的至少一种。
  39. 根据权利要求38所述的装置,其特征在于,所述第一推荐信息为网络地址链接,响应于用户对所述网络地址链接的预设操作的信号,指示所述显示器在所述第一应用界面上显示所述网络地址链接指向的内容。
  40. 根据权利要求39所述的装置,其特征在于,所述第一应用界面为取景界面,所述第一推荐信息为显示在所述第一应用界面上的一个或多个显示对象对应的信息,所述显示对象为图像信息。
  41. 根据权利要求31至40中任意一项所述的装置,其特征在于,所述AI功能入口界面还包括语音、图像和文字搜索,以及保存功能按钮中的至少一项。
  42. 根据权利要求31至41中任意一项所述的装置,其特征在于,响应于所述第一输入的信号,指示与所述装置连接的处理器对所述第一界面上的内容进行语义分析,提取一个或多个关键字,指示所述显示器显示包含有特定信息的AI功能入口界面,所述特定信息为与提取的关键字对应的信息。
  43. 根据权利要求31至42中任意一项所述的装置,其特征在于,所述场景服务任务界面,包括:
    在第一时间,指示所述显示器在所述场景服务任务界面的第一预设位置上显示第三应用程序的快捷方式,响应于接收到用户对所述第三应用程序的快捷方式的预设操作的信号,指示所述显示器在所述场景服务任务界面上显示所述第三应用程序对应的界面;
    在第二时间,指示所述显示器在所述场景服务任务界面的第一预设位置上显示第四应用程序的快捷方式,响应于接收到用户对所述第四应用程序的快捷方式的预设操作的信号,指示所述显示器在所述场景服务任务界面上显示所述第四应用程序对应的界面;
    其中,所述第三应用程序和第四应用程序是所述电子设备根据用户使用习惯确定的;所述第一时间不同于所述第二时间,所述第三应用程序不同于所述第四应用程序。
  44. 根据权利要求43所述的装置,其特征在于,在所述第一时间,指示所述显示器在所述第二按钮上显示与所述第三应用程序对应的内容;
    在所述第二时间,指示所述显示器在所述第二按钮上显示与所述第四应用程序对应的内容。
  45. 根据权利要求31至35中任意一项所述的装置,其特征在于,所述第一界面为主界面,所述第一界面还包括停靠Dock区,所述Dock区用于放置应用程序的快捷方式。
  46. 一种终端,包括显示屏,存储器,一个或多个处理器,多个应用程序,以及一个或多个程序;其中,所述一个或多个程序被存储在所述存储器中;其特征在于,所述一个或多个处理器在执行所述一个或多个程序时,使得所述终端实现如权利要求1至15中任意一项所述的方法。
  47. 一种可读存储介质,其特征在于,所述可读存储介质中存储有指令,当所述指令在终端上运行时,使得所述终端执行上述权利要求1至15中任意一项所述的方法。
  48. 一种计算机程序产品,其特征在于,所述计算机程序产品包括软件代码,所述软件代码用于执行上述权利要求1至15中任意一项所述的方法。
PCT/CN2017/117585 2017-12-20 2017-12-20 一种控制方法及装置 WO2019119325A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
PCT/CN2017/117585 WO2019119325A1 (zh) 2017-12-20 2017-12-20 一种控制方法及装置
CN201780089422.2A CN110494835A (zh) 2017-12-20 2017-12-20 一种控制方法及装置
US16/956,663 US11416126B2 (en) 2017-12-20 2017-12-20 Control method and apparatus
US17/862,816 US20230004267A1 (en) 2017-12-20 2022-07-12 Control Method and Apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2017/117585 WO2019119325A1 (zh) 2017-12-20 2017-12-20 一种控制方法及装置

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US16/956,663 A-371-Of-International US11416126B2 (en) 2017-12-20 2017-12-20 Control method and apparatus
US17/862,816 Continuation US20230004267A1 (en) 2017-12-20 2022-07-12 Control Method and Apparatus

Publications (1)

Publication Number Publication Date
WO2019119325A1 (zh) 2019-06-27

Family

ID=66994328

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/117585 WO2019119325A1 (zh) 2017-12-20 2017-12-20 Control method and apparatus

Country Status (3)

Country Link
US (2) US11416126B2 (zh)
CN (1) CN110494835A (zh)
WO (1) WO2019119325A1 (zh)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113238689A (zh) * 2021-05-12 2021-08-10 西安闻泰电子科技有限公司 Interaction method and apparatus, terminal device, and computer-readable storage medium
CN113888159B (zh) * 2021-06-11 2022-11-29 荣耀终端有限公司 Method for opening a function page of an application, and electronic device
CN113295180A (zh) * 2021-06-30 2021-08-24 北京市商汤科技开发有限公司 Flight navigation method and apparatus, computer device, and storage medium
WO2023005362A1 (zh) * 2021-07-30 2023-02-02 深圳传音控股股份有限公司 Processing method, processing device, and storage medium
CN113791850B (zh) * 2021-08-12 2022-11-18 荣耀终端有限公司 Information display method and electronic device
EP4421605A1 (en) * 2021-12-03 2024-08-28 Honor Device Co., Ltd. Application recommendation method and electronic device
CN114610199B (zh) * 2022-03-21 2023-04-21 北京明略昭辉科技有限公司 Session message processing method and apparatus, storage medium, and electronic device
CN116420989B (zh) * 2023-06-09 2023-08-22 广州美术学院 Intelligent office workstation


Family Cites Families (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7032188B2 (en) * 2001-09-28 2006-04-18 Nokia Corporation Multilevel sorting and displaying of contextual objects
KR101012300B1 (ko) * 2008-03-07 2011-02-08 삼성전자주식회사 User interface apparatus for a portable terminal having a touchscreen, and method thereof
US20110087966A1 (en) * 2009-10-13 2011-04-14 Yaniv Leviathan Internet customization system
US20110210922A1 (en) * 2010-02-26 2011-09-01 Research In Motion Limited Dual-screen mobile device
US9310834B2 (en) * 2011-06-30 2016-04-12 Z124 Full screen mode
US9836178B2 (en) * 2011-11-03 2017-12-05 Excalibur Ip, Llc Social web browsing
KR20130107974A (ko) * 2012-03-23 2013-10-02 삼성전자주식회사 Apparatus and method for providing a floating user interface
BR112014025516A2 (pt) * 2012-04-20 2017-08-08 Sony Corp Information processing apparatus and method, and program
CN104781776A (zh) * 2012-11-02 2015-07-15 通用电气智能平台有限公司 Device and method for context-based dynamic actions
SG11201505062UA (en) 2013-03-27 2015-08-28 Hitachi Maxell Portable information terminal
KR102044701B1 (ko) * 2013-07-10 2019-11-14 엘지전자 주식회사 Mobile terminal
KR102202899B1 (ko) * 2013-09-02 2021-01-14 삼성전자 주식회사 Method and apparatus for providing a plurality of applications
US9881592B2 (en) * 2013-10-08 2018-01-30 Nvidia Corporation Hardware overlay assignment
CN103533244A (zh) 2013-10-21 2014-01-22 深圳市中兴移动通信有限公司 Photographing apparatus and automatic visual-effect-processing photographing method thereof
CN114895839A (zh) * 2014-01-06 2022-08-12 华为终端有限公司 Application display method and terminal
CN105005678B (zh) * 2014-04-21 2018-11-23 腾讯科技(深圳)有限公司 Method and apparatus for a resource exchange platform to obtain role information
KR20150122510A (ko) 2014-04-23 2015-11-02 엘지전자 주식회사 Image display device and control method thereof
US9338493B2 (en) * 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
EP2980733A1 (en) * 2014-07-31 2016-02-03 Samsung Electronics Co., Ltd Message service providing device and method of providing content via the same
KR102383103B1 (ko) * 2014-08-13 2022-04-06 삼성전자 주식회사 Electronic device and screen display method thereof
KR20160026141A (ko) * 2014-08-29 2016-03-09 삼성전자주식회사 Window operating method and electronic device supporting the same
US10097973B2 (en) * 2015-05-27 2018-10-09 Apple Inc. Systems and methods for proactively identifying and surfacing relevant content on a touch-sensitive device
US20170024086A1 (en) * 2015-06-23 2017-01-26 Jamdeo Canada Ltd. System and methods for detection and handling of focus elements
US10169453B2 (en) * 2016-03-28 2019-01-01 Microsoft Technology Licensing, Llc Automatic document summarization using search engine intelligence
CN105975581B (zh) * 2016-05-05 2019-05-17 腾讯科技(北京)有限公司 Media information display method, client, and server
KR102547115B1 (ko) * 2016-06-03 2023-06-23 삼성전자주식회사 Method for switching applications and electronic device therefor
US10452748B2 (en) * 2016-06-20 2019-10-22 Microsoft Technology Licensing, Llc Deconstructing and rendering of web page into native application experience
CN106101544B (zh) * 2016-06-30 2019-06-04 维沃移动通信有限公司 Image processing method and mobile terminal
WO2018032271A1 (zh) 2016-08-15 2018-02-22 北京小米移动软件有限公司 Information search method and apparatus, electronic device, and server
CN106570102B (zh) * 2016-10-31 2021-01-22 努比亚技术有限公司 Intelligent chat method, apparatus, and terminal
KR102626633B1 (ko) * 2016-11-17 2024-01-18 엘지전자 주식회사 Terminal and control method thereof
US10203982B2 (en) * 2016-12-30 2019-02-12 TCL Research America Inc. Mobile-phone UX design for multitasking with priority and layered structure
US10692494B2 (en) * 2017-05-10 2020-06-23 Sattam Dasgupta Application-independent content translation
CN107256109B (zh) 2017-05-27 2021-03-16 北京小米移动软件有限公司 Information display method, apparatus, and terminal
CN107315820A (zh) * 2017-07-01 2017-11-03 北京奇虎科技有限公司 Emoji search method and apparatus based on a user interaction interface of a mobile terminal
CN107450798A (zh) * 2017-07-21 2017-12-08 维沃移动通信有限公司 Application startup method and apparatus, and mobile terminal
US10382383B2 (en) * 2017-07-28 2019-08-13 Upheaval LLC Social media post facilitation systems and methods
US10972254B2 (en) * 2017-07-28 2021-04-06 Upheaval LLC Blockchain content reconstitution facilitation systems and methods
CN107544810B (zh) * 2017-09-07 2021-01-15 北京小米移动软件有限公司 Method and apparatus for controlling an application
CN107544809B (zh) * 2017-09-07 2021-07-27 北京小米移动软件有限公司 Method and apparatus for displaying a page
WO2019047189A1 (zh) * 2017-09-08 2019-03-14 广东欧珀移动通信有限公司 Message display method and apparatus, and terminal
US10599878B2 (en) * 2017-11-20 2020-03-24 Ca, Inc. Using decoy icons to prevent unwanted user access to applications on a user computing device
US20190289128A1 (en) * 2018-03-15 2019-09-19 Samsung Electronics Co., Ltd. Method and electronic device for enabling contextual interaction
DK201870353A1 (en) * 2018-05-07 2019-12-04 Apple Inc. User interfaces for recommending and consuming content on an electronic device
USD885427S1 (en) * 2018-08-31 2020-05-26 Butterfly Network, Inc. Display panel or portion thereof with graphical user interface
US11017179B2 (en) * 2018-12-28 2021-05-25 Open Text Sa Ulc Real-time in-context smart summarizer

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170149959A1 (en) * 2013-05-15 2017-05-25 Lg Electronics Inc. Mobile terminal and controlling method thereof
CN107132971A (zh) * 2016-02-29 2017-09-05 福建兑信科技有限公司 Control method and system for an operation interface of a mobile terminal, and mobile terminal
CN107305551A (zh) * 2016-04-18 2017-10-31 百度在线网络技术(北京)有限公司 Method and apparatus for pushing information
CN107092471A (zh) * 2016-07-27 2017-08-25 阿里巴巴集团控股有限公司 Function button display method and apparatus
CN106293472A (zh) * 2016-08-15 2017-01-04 宇龙计算机通信科技(深圳)有限公司 Virtual key processing method and mobile terminal

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4002075A4 (en) * 2019-07-19 2022-09-14 Tencent Technology (Shenzhen) Company Limited Method and apparatus for interface display, terminal and storage media
US11816305B2 2019-07-19 2023-11-14 Tencent Technology (Shenzhen) Company Limited Interface display method and apparatus, and storage medium
CN111638850A (zh) * 2020-05-29 2020-09-08 维沃移动通信有限公司 Response method, response apparatus, and electronic device
CN114721279A (zh) * 2021-01-05 2022-07-08 深圳绿米联创科技有限公司 Smart home control method based on a floating window, and terminal device

Also Published As

Publication number Publication date
US20200409520A1 (en) 2020-12-31
US11416126B2 (en) 2022-08-16
CN110494835A (zh) 2019-11-22
US20230004267A1 (en) 2023-01-05

Similar Documents

Publication Publication Date Title
WO2019119325A1 (zh) Control method and apparatus
US11460983B2 (en) Method of processing content and electronic device thereof
KR102378513B1 (ko) Electronic device providing a message service and method by which the electronic device provides content
KR102447503B1 (ko) Electronic device providing a message service and method by which the electronic device provides content
EP2981104B1 (en) Apparatus and method for providing information
WO2023016563A1 (zh) Information reminding method and electronic device
KR101317547B1 (ko) Portable touch screen device, method, and graphical user interface for using emoji characters
US10775979B2 (en) Buddy list presentation control method and system, and computer storage medium
US20170118152A1 (en) Message providing methods and apparatuses, display control methods and apparatuses, and computer-readable mediums storing computer programs for executing methods
US20180145937A1 (en) Mobile terminal and method for controlling the same
KR101894395B1 (ko) Method for providing capture data and mobile terminal therefor
WO2021254293A1 (zh) Method for displaying a notification, and terminal
WO2018072149A1 (zh) Picture processing method and apparatus, electronic device, and graphical user interface
EP2487606A1 (en) Method for displaying internet page and mobile terminal using the same
CN107077292A (zh) Clip information providing method and apparatus
CN107302625B (zh) Method for managing events and terminal device therefor
KR20130026892A (ko) Mobile terminal and method for providing a user interface thereof
KR20160035564A (ko) Electronic device and information processing method of the electronic device
CN110502163A (zh) Control method of a terminal device, and terminal device
CN113127773A (zh) Page processing method and apparatus, storage medium, and terminal device
CN113422863A (zh) Information display method, mobile terminal, and readable storage medium
CN109857876A (zh) Information display method and terminal device
CN110442291A (zh) Control method and mobile terminal
WO2024067122A1 (zh) Window display method and electronic device
WO2023060897A1 (zh) Processing method, smart device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17935396

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17935396

Country of ref document: EP

Kind code of ref document: A1