US20230317071A1 - Control display method and device - Google Patents

Control display method and device

Info

Publication number
US20230317071A1
Authority
US
United States
Prior art keywords
control
interface
application
voice instruction
electronic device
Legal status: Pending (the listed status is an assumption, not a legal conclusion)
Application number
US18/006,703
Other languages
English (en)
Inventor
Hao Chen
Zhang Gao
Xiaoxiao CHEN
Shiyi Xiong
Zhihua Yin
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Definitions

  • Embodiments of this application relate to the field of voice control technologies, and in particular, to a control display method and a device.
  • For example, an interface of a video application such as Tencent Video supports a voice service response of “next episode”.
  • When the TV is woken up and receives a voice instruction from the user to play the next episode, the TV can identify and automatically perform the system event for playing the next episode, and feed back a response to the user.
  • However, the same “next episode” voice instruction may not be executable in another application interface. For example, if a control for playing the next episode is not available in that interface, the user cannot obtain a feedback response, resulting in decreased user satisfaction.
  • This application provides a control display method and a device, so that a same control can be displayed in different application interfaces, to improve user satisfaction. Specifically, the following technical solutions are disclosed:
  • According to a first aspect, this application provides a control display method that may be applied to an electronic device.
  • the electronic device includes a display, and a first interface of a first application is displayed on the display, where the first interface includes a first control.
  • the method includes: receiving a wake-up word input by a user; displaying a second interface of the first application in response to the received wake-up word, where the second interface includes the first control and a second control; receiving a switching operation of the user; displaying a first interface of a second application on the display, where the first interface of the second application includes the first control; receiving the wake-up word input again by the user; and displaying a second interface of the second application in response to the received wake-up word, where the second interface of the second application includes the first control and a third control.
  • In this method, when the electronic device receives the wake-up word input by the user, it automatically adds and displays, in the current interface of the first application, the second control that is not yet included in that interface. This implements automatic addition and display of the second control associated with the first application.
  • Similarly, the third control is automatically added and displayed in the current interface of the second application, so that when the user switches applications, both the first control from the original first application and the third control of the second application are shown on the display. This ensures that all controls associated with different applications are displayed, further improving the voice service function of the electronic device and user satisfaction.
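The wake-word flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the application names, control names, and the `Device` class are assumptions made for the demo:

```python
# Hedged sketch: on the wake-up word, the device augments the current
# interface with the voice controls associated with the running application.
# All names below are illustrative, not taken from this application.

ASSOCIATED_CONTROLS = {
    # hypothetical mapping: application -> voice controls it supports
    "video_app_1": ["play_pause", "next_episode"],
    "video_app_2": ["play_pause", "volume_up"],
}

class Device:
    def __init__(self, app, visible_controls):
        self.app = app
        self.visible = list(visible_controls)

    def on_wake_word(self):
        # add any associated control the current interface does not show yet
        for ctrl in ASSOCIATED_CONTROLS.get(self.app, []):
            if ctrl not in self.visible:
                self.visible.append(ctrl)
        return self.visible

    def switch_app(self, app, visible_controls):
        self.app = app
        self.visible = list(visible_controls)

d = Device("video_app_1", ["play_pause"])    # first interface: first control only
d.on_wake_word()                             # second interface also shows "next_episode"
d.switch_app("video_app_2", ["play_pause"])  # user switches applications
d.on_wake_word()                             # second app's interface also shows "volume_up"
```

After the second wake-up word, the controls shown come from the second application's association, mirroring the first/third-control behavior described above.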
  • In a possible implementation, before the displaying a second interface of the first application in response to the received wake-up word, the method further includes: obtaining a first component set based on a first interface type of the first application, where the first component set includes the second control.
  • the first component set further includes the third control, a fourth control, and the like.
  • In this way, a correspondence between each application's interface type and a component set is established, where the component set includes at least one voice control, so that voice controls are automatically added based on the interface type of the current interface.
  • the component set is also referred to as a “virtual component set”.
  • A component set displayed in the first interface of the first application is referred to as a first virtual component set, and a component set included in the first interface type of the first application is referred to as a second virtual component set.
  • In a possible implementation, before the displaying a second interface of the second application in response to the received wake-up word, the method further includes: obtaining a second component set based on a first interface type of the second application, where the second component set includes the third control.
  • the second component set includes the third control, so that the third control is automatically added based on the current interface type.
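The interface-type-to-component-set correspondence can be pictured as a simple lookup. The interface types and control names below are assumptions for illustration only:

```python
# Hedged sketch of the correspondence between an interface type and its
# component set of voice controls; contents are invented for the demo.

COMPONENT_SETS = {
    "video_playback": {"next_episode", "previous_episode", "play_pause", "episodes"},
    "music_playback": {"next_song", "play_pause"},
}

def controls_to_add(interface_type, controls_on_screen):
    """Return the voice controls in the interface type's component set
    that the current interface does not display yet."""
    return COMPONENT_SETS.get(interface_type, set()) - set(controls_on_screen)

# A video-playback interface that already shows play/pause would gain the rest:
added = controls_to_add("video_playback", ["play_pause"])
```

On the wake-up word, the device would display `added` alongside the existing controls, which is the automatic addition described above.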
  • the second interface of the first application further includes prompt information corresponding to the second control.
  • the prompt information may be: next episode, previous episode, play/pause, episodes, or the like.
  • the method further includes: displaying a third interface of the first application in response to a first voice instruction, where the third interface includes a service response that is output after an operation corresponding to the first voice instruction is performed.
  • In other words, when the user delivers the first voice instruction, a control in the first application is enabled, executes the first voice instruction, outputs a service response, and displays the service response in the third interface, to provide the corresponding voice service for the user.
  • the displaying a third interface of the first application in response to a first voice instruction includes: enabling the second control, performing the operation corresponding to the first voice instruction, and displaying the service response in the third interface of the first application.
  • Alternatively, the electronic device receives the service response sent by a server, and displays the service response in the third interface of the first application.
  • a function of the second control may be implemented by invoking the server, so that a service capability of the electronic device is improved, functions of all voice controls displayed in the current interface are provided for the user, and user satisfaction is improved.
  • Specifically, the displaying a second interface of the first application includes: displaying a control icon of the second control in the second interface of the first application; or displaying a control icon of the second control and the prompt information of the second control in the second interface of the first application.
  • both the control icon of the second control and the prompt information of the second control are added and displayed in the current interface, so that the user can easily deliver a voice instruction based on the prompt information. This improves voice interaction efficiency.
  • the second interface of the first application further includes a control icon of the fourth control.
  • the fourth control is used to perform an operation corresponding to a second voice instruction.
  • The control icon of the second control is in a first color, the control icon of the fourth control is in a second color, and the first color is different from the second color.
  • In response to the first voice instruction, the electronic device enables the second control and performs the corresponding operation. In response to the second voice instruction, the electronic device sends an indication signal to the server, where the indication signal indicates the server to perform the operation corresponding to the second voice instruction.
  • In this way, the electronic device uses different colors to differentiate a control for which it can provide a voice service from a control for which it cannot. For example, an icon of a control supported by the electronic device is displayed in the first color and an icon of an unsupported control is displayed in the second color, so that the user can easily identify and differentiate them.
  • In addition, the second voice instruction, for which the electronic device does not support providing a service response, may be executed with the help of the server or another device, and the result is then transmitted to the electronic device, so that the service capability of the electronic device is improved and the user's requirement is met.
  • differentiation may alternatively be performed in another manner, for example, a mark or a pattern. This is not limited in this application.
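The color differentiation and the local-versus-server dispatch rule can be sketched together. The set of locally supported controls and the concrete colors are assumptions for the demo, not values from this application:

```python
# Hedged sketch: locally supported controls run on the device; others are
# forwarded to a server via an indication signal. Icons are colored
# differently so the user can tell the two kinds apart.

LOCALLY_SUPPORTED = {"next_episode", "play_pause"}   # assumed for the demo
FIRST_COLOR, SECOND_COLOR = "blue", "gray"           # assumed colors

def icon_color(control):
    # first color for controls the device itself can serve, second otherwise
    return FIRST_COLOR if control in LOCALLY_SUPPORTED else SECOND_COLOR

def dispatch(control):
    if control in LOCALLY_SUPPORTED:
        return ("local", control)    # enable the control, perform the operation
    return ("server", control)       # send an indication signal to the server
```

A mark or pattern could replace the colors in `icon_color` without changing the dispatch logic, matching the alternative differentiation manner mentioned above.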
  • According to a second aspect, this application further provides a control display method, applied to an electronic device.
  • the electronic device includes a display.
  • the method includes: receiving a wake-up word input by a user; displaying a first interface of a first application on the display in response to the received wake-up word, where the first interface includes a first control; receiving a first voice instruction input by the user; and displaying a second interface of the first application in response to the received first voice instruction, where the second interface includes the first control and a second control, and the second control is used to perform an operation corresponding to the first voice instruction.
  • the electronic device may display, in a current interface of the electronic device, a control corresponding to any voice instruction delivered by the user, to provide a corresponding service when the user delivers the voice instruction again.
  • In a possible implementation, before the displaying a second interface of the first application in response to the received first voice instruction, the method further includes: obtaining text content corresponding to the first voice instruction, where the text content corresponds to the second control; and obtaining the second control when the first interface of the first application does not include the second control.
  • The obtaining the second control includes: obtaining the second control based on a software development kit (SDK) table, where the SDK table includes the text content and the second control.
  • the SDK table is used to expand a voice control function of the electronic device, so that automatic addition and display of the second control are implemented.
  • the SDK table further includes: the first control, text content corresponding to the first control, a third control, text content corresponding to the third control, and the like.
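The SDK-table lookup can be sketched as a text-to-control mapping. The table contents and function names below are invented for illustration:

```python
# Hedged sketch: recognized text content is looked up in an SDK table to
# find the corresponding control, which is added to the interface if missing.

SDK_TABLE = {
    "next episode": "next_episode_control",
    "previous episode": "previous_episode_control",
}

def handle_instruction(text, visible_controls):
    """Map recognized text to a control via the SDK table and add the
    control to the interface if it is not displayed yet."""
    control = SDK_TABLE.get(text)
    if control is None:
        return visible_controls, None                      # unknown instruction
    if control not in visible_controls:
        visible_controls = visible_controls + [control]    # add and display it
    return visible_controls, control

visible, ctrl = handle_instruction("next episode", ["play_pause_control"])
```

On a repeated delivery of the same instruction, the control is already present and can simply be enabled to perform the operation, matching the behavior described above.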
  • the method further includes: receiving again the first voice instruction input by the user; and displaying a third interface of the first application in response to the first voice instruction, where the third interface includes a service response that is output after the operation corresponding to the first voice instruction is performed.
  • the displaying a third interface of the first application in response to the first voice instruction includes: enabling the second control, performing the operation corresponding to the first voice instruction, and displaying the service response in the third interface of the first application.
  • the electronic device receives the service response sent by a server, and displays the service response in the third interface of the first application.
  • a function of the second control may be implemented by invoking the server, so that a voice service capability of the electronic device and user satisfaction are improved.
  • Specifically, the displaying a second interface of the first application includes: displaying a control icon of the second control in the second interface of the first application; or displaying a control icon of the second control and prompt information of the second control in the second interface of the first application.
  • the second interface of the first application further includes a control icon of the third control.
  • the third control is used to perform an operation corresponding to a second voice instruction.
  • The control icon of the second control is in a first color, the control icon of the third control is in a second color, and the first color is different from the second color.
  • In response to the first voice instruction, the electronic device enables the second control and performs the corresponding operation. In response to the second voice instruction, the electronic device sends an indication signal to the server, where the indication signal indicates the server to perform the operation corresponding to the second voice instruction.
  • In another aspect, this application provides a control display apparatus, including a display, where a first interface of a first application is displayed on the display, and the first interface includes a first control.
  • the apparatus further includes a receiving module and a processing module.
  • the receiving module is configured to receive a wake-up word input by a user.
  • the processing module is configured to: in response to the received wake-up word, indicate the display to display a second interface of the first application, where the second interface includes the first control and the second control; receive a switching operation of the user; and indicate the display to display a first interface of a second application, where the first interface of the second application includes the first control.
  • the receiving module is further configured to receive the wake-up word input again by the user.
  • the processing module is further configured to indicate, in response to the received wake-up word, the display to display a second interface of the second application, where the second interface of the second application includes the first control and a third control.
  • the processing module is further configured to obtain a first component set based on a first interface type of the first application before the second interface of the first application is displayed, where the first component set includes the second control.
  • the processing module is further configured to obtain a second component set based on a first interface type of the second application before the second interface of the second application is displayed, where the second component set includes the third control.
  • the second interface of the first application further includes prompt information corresponding to the second control.
  • the processing module is further configured to display a third interface of the first application in response to a first voice instruction, where the third interface includes a service response that is output after an operation corresponding to the first voice instruction is performed.
  • the processing module is further configured to: enable the second control, perform the operation corresponding to the first voice instruction, and indicate to display the service response in the third interface of the first application; or receive, by using a communications module, the service response sent by a server, and indicate to display the service response in the third interface of the first application.
  • The processing module is further configured to: indicate to display a control icon of the second control in the second interface of the first application; or indicate to display a control icon of the second control and the prompt information of the second control in the second interface of the first application.
  • the second interface of the first application further includes a control icon of a fourth control.
  • the fourth control is used to perform an operation corresponding to a second voice instruction.
  • The control icon of the second control is in a first color, the control icon of the fourth control is in a second color, and the first color is different from the second color.
  • the processing module is further configured to: in response to the first voice instruction, enable the second control and perform the operation corresponding to the first voice instruction; and in response to the second voice instruction, send an indication signal to the server by using the communications module.
  • the indication signal is used to indicate the server to perform the operation corresponding to the second voice instruction.
  • In another aspect, this application further provides a control display apparatus.
  • the apparatus includes a display.
  • the apparatus further includes a receiving module and a processing module.
  • the receiving module is configured to receive a wake-up word input by a user.
  • the processing module is configured to indicate, in response to the received wake-up word, the display to display a first interface of a first application, where the first interface includes a first control.
  • the receiving module is further configured to receive a first voice instruction input by the user.
  • the processing module is further configured to indicate, in response to the received first voice instruction, the display to display a second interface of the first application, where the second interface includes the first control and a second control, and the second control is used to perform an operation corresponding to the first voice instruction.
  • the processing module is further configured to: before displaying the second interface of the first application, obtain text content corresponding to the first voice instruction, where the text content corresponds to the second control; and obtain the second control when the first interface of the first application does not include the second control.
  • the processing module is further configured to obtain the second control based on an SDK table, where the SDK table includes the text content and the second control.
  • the receiving module is further configured to receive again the first voice instruction input by the user.
  • the processing module is further configured to indicate, in response to the first voice instruction, the display to display a third interface of the first application, where the third interface includes a service response that is output after the operation corresponding to the first voice instruction is performed.
  • the processing module is further configured to: enable the second control, perform the operation corresponding to the first voice instruction, and indicate the display to display the service response in the third interface of the first application; or receive, by using a communications module, the service response sent by a server, and indicate the display to display the service response in the third interface of the first application.
  • The processing module is further configured to: indicate the display to display a control icon of the second control in the second interface of the first application; or indicate the display to display a control icon of the second control and prompt information of the second control in the second interface of the first application.
  • the second interface of the first application further includes a control icon of a third control.
  • the third control is used to perform an operation corresponding to a second voice instruction.
  • The control icon of the second control is in a first color, the control icon of the third control is in a second color, and the first color is different from the second color.
  • the processing module is further configured to: in response to the first voice instruction, enable the second control and perform the operation corresponding to the first voice instruction; and in response to the second voice instruction, send an indication signal to the server.
  • the indication signal is used to indicate the server to perform the operation corresponding to the second voice instruction.
  • In another aspect, this application further provides an electronic device.
  • the electronic device includes a processor and a memory, and the processor is coupled to the memory.
  • the electronic device may further include a transceiver and the like.
  • the memory is configured to store computer program instructions.
  • the processor is configured to execute the program instructions stored in the memory, to enable the electronic device to perform the method in the implementations of the first aspect or the second aspect.
  • the transceiver is configured to implement a data transmission function.
  • the electronic device further includes an audio module, a loudspeaker, a receiver, a microphone, and the like. Specifically, after receiving a wake-up word input by a user, the microphone of the electronic device transmits the wake-up word to the audio module.
  • the processor processes the wake-up word parsed by the audio module, and indicates, in response to the received wake-up word, a display to display a second interface of a first application, where the second interface includes a first control and a second control.
  • the processor is further configured to: receive a switching operation of the user, and indicate the display to display a first interface of a second application, where the first interface of the second application includes the first control.
  • the processor is configured to indicate, in response to the received wake-up word, the display to display a second interface of the second application, where the second interface of the second application includes the first control and a third control.
  • the microphone of the electronic device receives the wake-up word input by the user.
  • the processor indicates, in response to the received wake-up word, the display to display a first interface of the first application, where the first interface includes the first control.
  • the microphone of the electronic device further receives a first voice instruction input by the user.
  • the processor indicates, in response to the received first voice instruction, the display to display the second interface of the first application, where the second interface includes the first control and the second control, and the second control is used to perform an operation corresponding to the first voice instruction.
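The microphone-to-audio-module-to-processor flow just described can be sketched as a small pipeline. The class names, the wake-up word, and the control naming are all hypothetical:

```python
# Hedged sketch of the hardware flow: the microphone's text reaches the
# audio module, which parses it; the processor then updates the display.

class AudioModule:
    WAKE_WORD = "hello device"          # assumed wake-up word for the demo
    def parse(self, audio_text):
        # classify the input as a wake-up word or a voice instruction
        return "wake" if audio_text == self.WAKE_WORD else "instruction"

class Processor:
    def __init__(self):
        self.display = []               # controls currently shown on the display
    def handle(self, kind, payload=None):
        if kind == "wake":
            # wake-up word: augment the interface with the second control
            self.display.append("second_control")
        else:
            # voice instruction: show the control that performs the operation
            self.display.append(payload + "_control")
        return self.display

audio, cpu = AudioModule(), Processor()
cpu.handle(audio.parse("hello device"))                    # wake-up word path
cpu.handle(audio.parse("next episode"), "next_episode")    # instruction path
```

In a real device the audio module would decode waveforms rather than strings; the sketch only mirrors the sequencing of the components described above.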
  • In another aspect, this application further provides a computer-readable storage medium.
  • the storage medium stores instructions. When the instructions are run on a computer or a processor, the instructions are used to perform the method according to the first aspect and the implementations of the first aspect, or used to perform the method according to the second aspect and the implementations of the second aspect.
  • In another aspect, this application further provides a computer program product.
  • the computer program product includes computer instructions. When the instructions are executed by a computer or a processor, the method according to the implementations of the first aspect or the second aspect can be implemented.
  • Beneficial effects of the technical solutions in the implementations of the third aspect to the sixth aspect are the same as the beneficial effects of the implementations of the first aspect and the second aspect, and are therefore not described again.
  • FIG. 1 is a schematic diagram of an architecture of an applied intelligent device system according to an embodiment of this application;
  • FIG. 2 is a flowchart of a control display method according to an embodiment of this application.
  • FIG. 3 is a schematic diagram of displaying a control in a first interface of a first application according to an embodiment of this application;
  • FIG. 4 A is a schematic diagram of displaying a second control in a second interface of a first application according to an embodiment of this application;
  • FIG. 4 B is a schematic diagram of displaying prompt information in a second interface of a first application according to an embodiment of this application;
  • FIG. 5 is a schematic diagram of jumping to a global response based on a voice instruction according to an embodiment of this application;
  • FIG. 6 is a flowchart of another control display method according to an embodiment of this application.
  • FIG. 7 A is a schematic diagram of displaying a second control in a second interface of a first application according to an embodiment of this application;
  • FIG. 7 B is a schematic diagram of displaying a third control in a second interface of a second application according to an embodiment of this application;
  • FIG. 8 is a schematic diagram of a distributed interface supporting all voice instructions according to an embodiment of this application.
  • FIG. 9 is a schematic diagram of a structure of a control display apparatus according to an embodiment of this application.
  • FIG. 10 is a schematic diagram of a structure of an electronic device according to an embodiment of this application.
  • FIG. 1 is a schematic diagram of an architecture of an applied intelligent device system according to an embodiment of this application.
  • the system may include at least one electronic device.
  • the electronic device includes but is not limited to: a mobile phone, a tablet computer (Pad), a personal computer, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wearable device, a television (TV), an in-vehicle terminal device, and the like.
  • the system shown in FIG. 1 includes a device 101, a device 102, and a device 103, where the device 101 is a mobile phone, the device 102 is a tablet computer, and the device 103 is a TV.
  • the system may alternatively include more or fewer devices.
  • the system further includes a cloud server 104 .
  • the cloud server 104 is separately connected to the device 101 , the device 102 , and the device 103 in a wireless manner, so as to implement interconnection between the device 101 , the device 102 , and the device 103 .
  • Each electronic device includes an input/output apparatus, which may be configured to: receive an operation instruction input by a user by performing an operation, and display information to the user.
  • the input/output apparatus may be a plurality of independent apparatuses.
  • the input apparatus may be a keyboard, a mouse, a microphone, or the like; and the output apparatus may be a display or the like.
  • the input/output apparatus may be integrated into one device, for example, a touch display.
  • the input/output apparatus may display a user interface (UI), to interact with the user.
  • the UI is a medium interface for interaction and information exchange between an application or an operating system and the user, and is used to implement conversion between an internal form of information and a form acceptable to the user.
  • a user interface of an application is source code written in a specific computer language such as Java or an extensible markup language (XML).
  • Interface source code is parsed and rendered on an electronic device, and is finally presented as user-recognizable content, for example, a control such as a picture, a text, or a button.
  • the control also referred to as a widget, is a basic element on the user interface.
  • Typical controls include a toolbar, a menu bar, a text box, a button, a scrollbar, a picture, and a text.
  • the control may have its own attribute and content.
  • the attribute and content of the control in the user interface may be defined by using a tag or a node.
  • a control included in an interface is defined in the XML by using nodes such as ⁇ Textview>, ⁇ ImgView>, and ⁇ VideoView>.
  • One node corresponds to one control or one attribute in the user interface. After being parsed and rendered, the node is presented as user-visible content.
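  • As an illustrative sketch only (the node names ⟨Textview⟩, ⟨ImgView⟩, and ⟨VideoView⟩ follow the example above, but the layout fragment and function names are hypothetical, not taken from this application), the correspondence between nodes in interface source code and user-visible controls can be mimicked with a small parser:

```python
import xml.etree.ElementTree as ET

# Hypothetical interface source code: each node defines one control,
# in the spirit of the <Textview>/<ImgView>/<VideoView> example above.
LAYOUT_XML = """
<Interface>
    <Textview id="title" text="Episode 1"/>
    <ImgView id="poster" src="poster.png"/>
    <VideoView id="player" autoplay="false"/>
</Interface>
"""

def parse_controls(xml_source):
    """Parse the source code and return (node tag, control id) pairs,
    one per control that would be rendered in the user interface."""
    root = ET.fromstring(xml_source)
    return [(node.tag, node.attrib.get("id")) for node in root]

controls = parse_controls(LAYOUT_XML)
```

After parsing and rendering, each pair would be presented as one user-visible control, for example the ("VideoView", "player") node as a video player.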
  • a user interface of a hybrid application usually further includes a web page.
  • the web page also referred to as a page, may be understood as a special control embedded in a user interface of an application.
  • the web page is source code written in a specific computer language, for example, a hypertext markup language (HTML), cascading style sheets (CSS), or JavaScript (JS).
  • the web page source code may be loaded and displayed as user-recognizable content by a browser or a web page display component with a function similar to a function of the browser.
  • Specific content included in the web page is also defined by using a tag or a node in the web page source code.
  • an element and an attribute of the web page are defined in the HTML by using ⁇ p>, ⁇ img>, ⁇ video>, or ⁇ canvas>.
  • the user interface is usually in a representation form of a graphical user interface (GUI).
  • the GUI is a user interface that is related to an operation of the electronic device and that is displayed in a graphical manner.
  • the graphical user interface may be an interface element such as a window or a control displayed on a display of an electronic device.
  • a display form of the control includes various visual interface elements such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, and a navigation bar.
  • an integrated development environment (IDE) is used to develop and generate a control, where the IDE integrates a plurality of functions such as editing, designing, and debugging in a common environment, so as to provide strong support for developers to develop applications quickly and conveniently.
  • the IDE includes a menu, a toolbar, and some windows.
  • the toolbar may be used to add a control to a form.
  • the form is a small screen area, usually a rectangle, which may be used to display information to the user and receive input information from the user.
  • This embodiment provides a control display method, to provide a common voice interaction capability for a user by adding a virtual voice control to a display, so as to improve user satisfaction.
  • the method may be applied to any one of the foregoing electronic devices. Specifically, as shown in FIG. 2 , the method includes the following steps.
  • the first interface may be a current interface.
  • when obtaining the wake-up word input by the user, the electronic device automatically enters an instruction input state and waits for the user to deliver a voice instruction.
  • the wake-up word may be a predefined wake-up word, for example, Xiaoyi Xiaoyi or Xiaoai, or may be a generalized wake-up word.
  • for example, when a camera of the electronic device detects that the user's attention is focused on the current display, or when a voice instruction that matches a preset voice instruction set is detected during voice interaction between the user and the electronic device, the electronic device may be woken up and enter a voice instruction input state.
  • the at least one control includes a first control.
  • the first control is a control displayed in the current interface when the electronic device is woken up.
  • the first control displayed in the current interface includes, for example, a play/pause control.
  • a display form of the first control may be an icon, a button, a menu, a tab, a text box, or the like.
  • the first application is a video playing application, for example, Huawei Video or Tencent Video.
  • the first interface is a video playing interface.
  • the first interface is a text browsing interface.
  • the first control is a common control or a control commonly used in various applications.
  • the common control may be a “play/pause” control, or the common control is any control in a virtual component set.
  • the first voice instruction indicates a service that the user expects the current interface to respond to. For example, when the current interface is the video playing interface, the first voice instruction is “play at a 2x speed”, or when the current interface is the text browsing interface, the first voice instruction is “amplify”.
  • the first voice instruction may alternatively be “present a control icon” or the like.
  • the text content corresponds to the second control, and the second control is used to perform an operation corresponding to the first voice instruction.
  • all controls in the first interface of the first application are traversed, and it is determined whether the first interface has the second control that executes the first voice instruction, for example, it is determined whether the first control in the first interface can perform an operation of “play at a 2x speed”.
  • the controls included in the first interface are, for example, a play/pause control and the like.
  • a control that can perform the operation of “play at a 2x speed” is searched for.
  • if the first interface of the first application includes the second control, enable the second control and perform the operation corresponding to the first voice instruction, to provide a service for the user.
  • the method further includes: feeding back a corresponding service response to the user.
  • the electronic device finds that a control in the first interface can provide a function of “play at a 2x speed”, the electronic device correspondingly enables the control, performs an operation of “play at a 2x speed”, and displays a service response in the current interface.
  • if the first interface of the first application does not include the second control, obtain the second control, and display the second control in a second interface of the first application.
  • an implementation of determining the second control is searching, based on a software development kit (SDK) table, for the second control corresponding to the first voice instruction.
  • SDK is a set of development tools used to create application software for a specific software package, software framework, hardware platform, operating system, and the like.
  • for example, the SDK may be an SDK used to develop an application program on a Windows platform.
  • the SDK not only can provide application programming interface (API) files for a programming language, but also can communicate with a specific embedded system.
  • the SDK table includes a correspondence between text content of at least one voice instruction and at least one control.
  • the control may be represented by using a control icon.
  • an SDK table may include but is not limited to the following correspondences: play/pause, next episode, turn on/off bullet chatting, send bullet chats, speed, and exit.
  • the SDK table may be pre-stored in the electronic device, or obtained by the electronic device from a cloud server.
  • the SDK table may be updated in real time, and periodically obtained by the electronic device, to provide rich voice service functions for the user.
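  • A minimal sketch of such a lookup, with an entirely hypothetical in-memory SDK table (the real table format and entries are not specified in this application):

```python
# Hypothetical SDK table: text content of a voice instruction mapped
# to the control that can perform the corresponding operation.
SDK_TABLE = {
    "play": "play/pause",
    "pause": "play/pause",
    "play a next episode": "next episode",
    "turn on bullet chatting": "turn on/off bullet chatting",
    "turn off bullet chatting": "turn on/off bullet chatting",
    "send bullet chats": "send bullet chats",
    "play at a 2x speed": "speed",
    "exit": "exit",
}

def find_second_control(instruction_text, table=SDK_TABLE):
    """Look up the control corresponding to a voice instruction.

    Returns None when the table has no matching control, in which case
    the device could fetch an updated table from the cloud server.
    """
    return table.get(instruction_text.lower())
```

For example, the instruction "play at a 2x speed" resolves to the "speed" control, while an unknown instruction yields no control and would trigger a table update.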
  • Another implementation of determining the second control includes:
  • the first virtual component set includes one or more controls displayed in the first interface of the first application when the electronic device receives the wake-up word and enters the instruction input state.
  • the second virtual component set includes at least one preset control. A quantity of all controls included in the second virtual component set is greater than or equal to a quantity of controls in the first virtual component set.
  • the second virtual component set is associated with a first interface type of the first application.
  • the first interface type of the first interface includes video playing, music playing, picture/photo preview, text browsing, and the like.
  • the second virtual component set includes the second control.
  • the second control may be a common control. For example, when the first voice instruction delivered by the user in step 102 is “play a next episode”, and a “next episode” control is a voice control in a virtual component set of an interface type of a video playing interface, it is determined that the second virtual component set is a virtual component set corresponding to the video playing interface.
  • 105-2 Determine the second control, where the second control belongs to the second virtual component set but does not belong to the first virtual component set. There may be one or more second controls.
  • the first virtual component set includes only one “play/pause” control
  • the second virtual component set includes six controls: play/pause, next episode, turn on/off bullet chatting, send bullet chats, speed, and exit. Therefore, it is determined that the second control has all controls except the “play/pause” control.
  • the second control includes: next episode, turn on/off bullet chatting, send bullet chats, speed, and exit.
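  • The comparison between the two virtual component sets amounts to a set difference; the following sketch assumes controls are identified by name, using the "play/pause" example above:

```python
# Controls already displayed in the first interface at wake-up
# (the first virtual component set in the example above)
first_virtual_set = {"play/pause"}

# Preset controls associated with the "video playing" interface type
# (the second virtual component set)
second_virtual_set = {
    "play/pause", "next episode", "turn on/off bullet chatting",
    "send bullet chats", "speed", "exit",
}

# The second controls are those in the second set but not in the first
second_controls = second_virtual_set - first_virtual_set
```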
  • in step 105, after determining the second control from the SDK table, the electronic device adds a control icon corresponding to the second control to the second interface of the first application. Similarly, if the electronic device determines a plurality of second controls based on the virtual component set, all the second controls are displayed in the second interface.
  • a control icon 36 corresponding to the “next episode” control is displayed in a current video playing interface (that is, the second interface).
  • the second interface further includes the first control.
  • a control icon corresponding to the first control, for example, the play/pause icon, is also displayed in the second interface.
  • the method further includes: adding prompt information corresponding to the second control to the second interface.
  • Each control corresponds to one piece of prompt information, and each piece of prompt information gives a prompt of a voice function corresponding to the control.
  • the user may deliver a corresponding voice instruction based on the prompt information.
  • the electronic device When receiving a voice instruction including the prompt information, the electronic device enables, based on the correspondence, a control corresponding to the voice instruction. For example, as shown in FIG. 4 B , when the first voice instruction input by the user is “play a next episode”, it is queried that the first interface shown in FIG.
  • the correspondence between the prompt information and the control may be stored in the foregoing SDK table, or stored separately.
  • the prompt information may be the same as or different from the text content.
  • the first voice instruction delivered again by the user may include more voice content in addition to the prompt information. This is not limited in this embodiment.
  • control icon corresponding to the second control and the prompt information may be displayed together in a blank area of the current interface, or may be added to the current interface in a form of a floating window.
  • a specific adding manner is not limited in this embodiment.
  • the electronic device may display, in the current interface of the electronic device, a control corresponding to any voice instruction delivered by the user, to provide a corresponding service when the user delivers the voice instruction again.
  • displaying a control corresponding to any voice instruction delivered by the user avoids the disadvantage that the voice instruction delivered by the user cannot be executed in the current interface of the electronic device because controls in different application interfaces are different.
  • the SDK table or the virtual component set is used to expand a voice control function of the electronic device, so that automatic addition and display of the second control are implemented, and a service function of voice text content and user satisfaction are improved.
  • the method further includes:
  • the second control is displayed in the second interface of the first application based on the SDK table, the second control is directly enabled, the operation corresponding to the text content of the first voice instruction is performed, and a service response is output.
  • the electronic device obtains again the first voice instruction delivered by the user, where the text content corresponding to the first voice instruction may include the prompt information corresponding to the second control or may be the same as the prompt information corresponding to the second control, the second control is enabled, the operation corresponding to the first voice instruction is performed, and a service response is output.
  • if the voice instruction of “play a next episode” (or “next episode”) delivered again by the user is received, and text content obtained by parsing the voice instruction includes “next episode”, the “next episode” control is enabled, and an operation of playing a next episode is performed, to provide a voice service for the user.
  • the process of enabling the second control and outputting the service response specifically includes:
  • 106-1 Detect whether the second control in the second interface of the electronic device can perform the operation corresponding to the first voice instruction, that is, determine whether the second control can provide a function service for the first voice instruction.
  • the electronic device may obtain a service response by using the cloud server or another electronic device.
  • the service response is generated after the cloud server or the another electronic device performs the operation corresponding to the first voice instruction, transmitted to the electronic device, and displayed by the electronic device on the display after the electronic device receives the service response.
  • the electronic device After the electronic device receives a second voice instruction “zoom in on a picture” delivered by the user, if the second control in the second interface of the electronic device does not support a function of “zoom in on a picture”, the electronic device sends an original picture to the cloud server or a second electronic device, and the cloud server or the second electronic device performs zoom-in processing on the original picture.
  • the cloud server may further send the original picture to another electronic device having the function of “zoom in on a picture”, obtain a zoomed-in picture, and finally send the zoomed-in picture to the electronic device.
  • step 106-3 If the second control in the second interface of the electronic device can perform the operation corresponding to the first voice instruction, that is, the second control can provide the function service, enable the second control, perform the operation corresponding to the first voice instruction, and output a service response.
  • a specific process is the same as that in step 104 , and details are not described again.
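  • The step-106 flow above (check whether the control can perform the operation locally, otherwise obtain the service response from the cloud server or another electronic device) can be sketched as follows; the function name and the tuple-based responses are illustrative assumptions, not part of this application:

```python
def output_service_response(operation, supported_ops, cloud_execute):
    """Perform the operation locally when a control in the current
    interface supports it; otherwise obtain the service response from
    the cloud server (or another electronic device) and display it."""
    if operation in supported_ops:
        # the control can provide the function service: enable it
        return ("local", operation)
    # otherwise, e.g. send the original picture to the cloud server,
    # which performs the zoom-in and returns the processed result
    return ("cloud", cloud_execute(operation))
```

For example, when "zoom in on a picture" is not among the locally supported operations, the response is fetched via the cloud callback and then displayed on the device.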
  • the method further includes: the electronic device receives a second voice instruction delivered by the user and obtains text content corresponding to the second voice instruction. The text content corresponding to the second voice instruction is different from the text content of the first voice instruction in step 102 , but is the same as text content of a second control already added in step 105 . In this case, the added second control is enabled, an operation corresponding to the text content of the second voice instruction is performed, and a corresponding service response is output.
  • the second voice instruction is “turn off bullet chatting”, and the second voice instruction of “turn off bullet chatting” is different from the first voice instruction of “play a next episode”.
  • the second control is enabled, an operation of “turn off bullet chatting” is performed, and an operation result is displayed to the user in the current interface.
  • a control for performing the text content of the second voice instruction may be one of the plurality of second controls determined based on the virtual component set, or may be one of a plurality of controls originally included in the electronic device. This is not limited in this embodiment.
  • the method further includes: The electronic device displays, in a differentiated manner, a control that is supported by the electronic device to provide a voice service and a control that is not supported by the electronic device to provide a voice service.
  • the electronic device displays, in the second interface, the control supported by the electronic device in a first color (for example, green) and the control not supported by the electronic device in a second color (for example, red), so that the user can make easy identification and differentiation.
  • another manner for differentiation may alternatively be used, for example, marking.
  • a specific manner for differentiation is not limited in this embodiment. As described above, for the control not supported by the electronic device, a service response needs to be obtained for a voice instruction corresponding to the control by using the cloud server or the another electronic device.
  • functions of all the second controls may be implemented by invoking the cloud server, so that a service capability of the electronic device is improved, functions of all voice controls displayed in the current interface are provided for the user, and user satisfaction is improved.
  • the displayed service response corresponding to the second control may include an interface response and a global response.
  • the service response output by the electronic device includes an interface response and a global response.
  • the interface response means that the electronic device does not need to jump from the current first application to a second application when the electronic device performs an operation.
  • the operation can be completed in an interface of the current first application, for example, the foregoing operations such as “play a next episode”, “turn off bullet chatting”, and “zoom in on a picture”.
  • the global response means that the electronic device needs, when performing an operation, to jump from the current first application to a second application, and provide a service response in an interface of the second application.
  • a possible implementation includes:
  • the interface of the first application is a picture preview interface.
  • a control that needs to be added is a “play music” control
  • a control icon of the “play music” control is added to the picture preview interface, and then an application interface corresponding to “play music” is jumped to, for example, the second application.
  • the interface of the second application is a music playing interface.
  • the “play music” control is enabled directly when the voice instruction input by the user is received again, and an operation corresponding to the music playing instruction is performed, to provide a music playing function for the user.
  • the voice instruction of “play music” is a switching instruction. After receiving the switching instruction, the electronic device performs an interface switching operation.
  • the interface of the first application or the interface of the second application includes interfaces such as video playing, music playing, picture/photo preview, text browsing, dialing, and message sending.
  • a voice instruction delivered by the user may be referred to as an interface voice; and for the global response, a voice instruction delivered by the user may be referred to as a global voice.
  • the “play music” control may be displayed in the picture preview application interface in a form of a floating window, and controls such as a music list, a song name, and play/pause may be displayed in the floating window.
  • a program list may be further displayed in the floating window, for example, a list of live programs on all TV channels.
  • An embodiment further provides another control display method.
  • a difference from Embodiment 1 lies in that, before a user delivers a first voice instruction, in this embodiment, a second control is already determined, and the second control is displayed in an application interface of an electronic device, to provide rich service responses for the user.
  • a first interface of a first application is displayed on a display of the electronic device.
  • the first interface includes a first control.
  • the method includes the following steps.
  • An electronic device receives a wake-up word input by a user.
  • the step includes:
  • the first virtual component set is associated with a first interface of the first application, and includes one or more controls displayed in the first interface when the electronic device is woken up.
  • controls displayed in the first interface include: exit 71, download 72, message bar 73, contents 74, eye comfort brightness 75, text-to-speech 76, and reading settings “Aa” 77.
  • a set including these controls is the first virtual component set.
  • the second virtual component set is associated with a first interface type of the first application.
  • the first interface type is text browsing.
  • a virtual component set corresponding to the text browsing includes at least one common control.
  • the common controls may include all controls in the first virtual component set, and a quantity of all controls included in the second virtual component set is greater than or equal to a quantity of controls in the first virtual component set.
  • the common control may be created and added by using an SDK.
  • a method for obtaining the second virtual voice component set is implemented by the electronic device based on the first interface type of the first application.
  • a correspondence exists between an interface type of each application and a virtual component set.
  • the electronic device may determine, based on the correspondence, a virtual component set corresponding to an interface type of a current application, that is, the second virtual component set.
  • each interface type corresponds to one virtual component set.
  • the text browsing interface corresponds to a “virtual component set 4”, and when it is determined that the virtual component set 4 is the second virtual component set, the second virtual component set includes all controls in the virtual component set 4.
  • the foregoing correspondences may be combined with the SDK table in Embodiment 1 to form a new relationship table.
  • the new relationship table includes content such as the interface type, the virtual component set, the control icon included in each virtual component set, and the prompt information corresponding to each control.
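  • Such a relationship table can be sketched as a plain mapping from interface type to virtual component set; the set contents mirror the “virtual component set 1” and “virtual component set 4” examples in this embodiment, while the dictionary itself is an assumed representation:

```python
# Assumed representation of the relationship table:
# interface type -> virtual component set
INTERFACE_COMPONENT_SETS = {
    "video playing": {   # "virtual component set 1"
        "play/pause", "next episode", "turn on/off bullet chatting",
        "send bullet chats", "speed", "exit",
    },
    "text browsing": {   # "virtual component set 4"
        "exit", "download", "message bar", "contents",
        "eye comfort brightness", "text-to-speech", "reading settings",
        "previous chapter", "next chapter",
    },
}

def controls_to_add(interface_type, displayed_controls):
    """Second controls: the preset set for the current interface type
    minus the controls already displayed in the current interface."""
    return INTERFACE_COMPONENT_SETS[interface_type] - set(displayed_controls)
```

For a text browsing interface that already shows the seven controls of the first virtual component set, only the “previous chapter” and “next chapter” controls remain to be added.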
  • a control icon and prompt information that correspond to each second control may be displayed together in a blank area of the second interface, or displayed in a form of a floating window.
  • the blank area may be understood as an area that is not covered by a control.
  • a size of an existing control icon in the interface may be reduced or moved to free up a blank area, and then the control icon and the prompt information are displayed in the blank area.
  • a position and a manner of displaying the second control are not limited in this embodiment.
  • the ebook application displays the first interface.
  • the electronic device obtains the first virtual component set based on the first interface of the first application.
  • the first virtual component set includes the following controls: exit 71, download 72, message bar 73, contents 74, eye comfort brightness 75, text-to-speech 76, and reading settings “Aa” 77.
  • the electronic device determines that an interface type of the first interface is a type of a “text browsing” interface, obtains the second virtual component set based on the “text browsing” interface, where the second virtual component set corresponds to the “virtual component set 4” in Table 2 , and searches the “virtual component set 4” in Table 2 to obtain the following controls: exit 71, download 72, message bar 73, contents 74, eye comfort brightness 75, text-to-speech 76, reading settings “Aa” 77, previous chapter “Previous chapter” 78, and next chapter “Next chapter” 79.
  • control icons “Previous chapter” 78 and “Next chapter” 79 corresponding to the second controls are added to the second interface of the first application.
  • the method further includes: enabling the second control, performing an operation corresponding to the second control, and outputting a service response.
  • a specific execution process is the same as that in step 106 in Embodiment 1.
  • the switching operation corresponds to a global response.
  • the switching operation may be manual switching performed by the user, or the switching operation is started based on a voice instruction input by the user. For example, when the user delivers a voice instruction of “read news”, the electronic device receives and parses the voice instruction, and performs an interface switching operation.
  • the ebook application (APP) is switched to the second application.
  • the second application is a video playing application, and an interface of the video playing application includes the first control.
  • the first interface of the second application includes the following first controls: play/pause 31, turn on/off bullet chatting 32, send bullet chats 33, speed “Speed” 34, and exit 35.
  • the exit control 35 is the same as the exit control 71 in the first interface of the first application.
  • a component set corresponding to a first interface type of the second application is a third virtual component set.
  • the electronic device determines that an interface type corresponding to the current video playing application is a type of a “video playing” interface, and searches, based on Table 2, for a “virtual component set 1” corresponding to the “video playing” interface.
  • the virtual component set 1 includes the following controls: play/pause, next episode, turn on/off bullet chatting, send bullet chats, speed, and exit.
  • the third control is determined as “next episode”, and a control icon 36 of “next episode” is added to the second interface of the second application.
  • a specific adding process is the same as that in Embodiment 1, and details are not described again in this embodiment.
  • the method further includes: enabling the third control, performing an operation corresponding to the third control, and displaying an output service response in the interface of the second application.
  • the “next episode” control 36 is enabled, a voice instruction operation of “play a next episode” is performed, and then video content of the next episode is displayed in the interface of the second application.
  • the electronic device when the electronic device receives another voice instruction delivered by the user, and the voice instruction corresponds to a fourth control in the second interface of the second application, the electronic device enables the fourth control, performs an operation corresponding to the current voice instruction delivered by the user, and displays a response result in the current interface of the second application.
  • the fourth control may be any one of play/pause 31, turn on/off bullet chatting 32, send bullet chats 33, speed "Speed" 34, and exit "⁇" 35.
  • the electronic device may further differentiate, by using different colors or marks, between controls for which the electronic device can provide a voice service and controls for which it cannot.
  • a control function may be implemented by using a cloud server, to provide rich voice service functions for the user.
  • a virtual component set corresponding to each interface type is set, and the virtual component set is compared with controls included in a current interface to determine controls that are not included in the current interface but are commonly used, and these controls are automatically added to the current interface of the electronic device.
  • the electronic device automatically adds and displays, in a current interface of the first application, the second control that is not included in the current interface, to implement automatic addition and display of the second control associated with the first application. This ensures that the same voice control is displayed in applications of a same type. For example, according to this method, voice controls of "Previous chapter" and "Next chapter" are displayed on interfaces of different ebook applications, so that the user can easily interact by voice and user experience is improved.
  • the third control is automatically added and displayed in a current interface of the second application, so that all controls corresponding to an interface type of the current application can be displayed on the display of the electronic device based on the interface type when the user switches applications.
  • the voice control of “next episode” that is not included in a current interface can be automatically added and displayed in the video playing interface. In this way, all voice controls associated with different applications are displayed on the display of the electronic device, so that a voice service function of the electronic device and user satisfaction are improved.
  • prompt information corresponding to the newly added control is further displayed.
  • the following prompts may be included in a search box of the “search” control:
  • Prompt 1: A text or floating annotation text is displayed in or outside the search box, for example, "Please speak out the content to be searched, for example, the 100th element or a nice-looking pen", and the annotations are highlighted.
  • Prompt 2: A text or floating annotation text is displayed in or outside the search box.
  • the search text can be generalized information such as “search for pictures or search for information”, or hot words such as “Roast Show” and “COVID-19 virus”.
  • after the user speaks out voice content for searching based on the foregoing prompt, the electronic device automatically and quickly performs searching based on a preset text, searches a database for a result, and outputs a service response.
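The prompt-then-search flow might be sketched as follows; the prompt string and the in-memory "database" are stand-ins for the device's real search backend, which the patent does not specify.

```python
# Hypothetical sketch of the prompt-then-search flow. The prompt string and the
# in-memory "database" stand in for the device's real search backend.

SEARCH_PROMPT = "Please speak out the content to be searched"

DATABASE = {
    "roast show": "Roast Show - variety program",
    "covid-19 virus": "COVID-19 virus - health information",
}

def handle_search(spoken_text):
    """Normalize the recognized text, look it up, and return a service response."""
    key = spoken_text.strip().lower()
    if key in DATABASE:
        return DATABASE[key]
    return f"No result found for '{spoken_text}'"
```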
  • the method further includes: automatically creating and updating a control set, to provide rich voice service functions for different electronic devices.
  • a voice environment includes devices such as a mobile phone, a TV, and a head unit.
  • Each device includes different voice controls and supports different voice control functions.
  • voice instructions for which a virtual component set of the mobile phone terminal can provide a service include {A, B, C, D};
  • voice instructions supported by a virtual component set of the TV include {A, B, C, D, E, F, G};
  • a voice instruction supported by a virtual component set of the head unit includes {F}.
  • a common virtual component set predefined by a system is further included.
  • voice instructions that can be supported by a voice control developed in an IDE environment by using an SDK include ⁇ A, B, C, D, E, F, G ⁇ , and cover all voice instructions in distributed interfaces of a plurality of devices.
  • At least one target control is added to the devices such as the mobile phone, the TV, and the head unit based on a virtual component set that is of all voice instructions and that is integrated by using the SDK, so as to ensure that each device has a capability of executing all voice instructions. This improves user experience.
  • after the voice instructions stored in the distributed interface are compared with the voice instructions supported by the mobile phone, it is determined to add controls corresponding to voice instructions {E, F, G} to an application interface of the mobile phone, so that the mobile phone can perform all operations of the voice instructions A to G.
  • because the types of voice instructions currently stored in the TV are the same as those of the voice instructions in the SDK, that is, the TV can already perform the operations of all voice instructions, no new control needs to be added.
  • controls corresponding to the voice instructions {A, B, C, D, E, G} that are not included in the head unit (which supports only {F}) need to be added.
  • a specific method for adding a corresponding control is the same as the method in the foregoing embodiment. Details are not described again.
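The per-device reconciliation above reduces to a set difference against the full SDK instruction set. The device names and instruction letters follow the example; the data structures themselves are assumptions for illustration.

```python
# Sketch of reconciling each device's supported voice instructions against the
# full SDK set; device names and instruction letters follow the example above,
# while the data structures themselves are assumptions.

SDK_INSTRUCTIONS = set("ABCDEFG")

DEVICE_INSTRUCTIONS = {
    "mobile phone": set("ABCD"),
    "TV": set("ABCDEFG"),
    "head unit": {"F"},
}

def missing_controls(device):
    """Instructions the device cannot yet execute, i.e. controls to add."""
    return SDK_INSTRUCTIONS - DEVICE_INSTRUCTIONS[device]

print(sorted(missing_controls("mobile phone")))  # ['E', 'F', 'G']
print(sorted(missing_controls("TV")))            # []
print(sorted(missing_controls("head unit")))     # ['A', 'B', 'C', 'D', 'E', 'G']
```

After the missing sets are added, every device can execute the full instruction set {A, ..., G}.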
  • a new virtual component set created and developed by using the IDE includes voice controls that can execute all voice instructions in the distributed interface, and these controls are automatically added to different electronic devices, so that voice service capabilities of the electronic devices are improved.
  • each electronic device further supports invoking of a remote voice capability, for example, obtaining a service response of a target control from the cloud server, to avoid re-development of the newly added control on each electronic device. This reduces software development costs.
  • the virtual component set described in the foregoing embodiments is also referred to as a “component set”.
  • the second virtual component set may be referred to as a "first component set", and the third virtual component set may be referred to as a "second component set".
  • the first interface type includes but is not limited to video playing, music playing, picture/photo preview, text browsing, and the like.
  • the first application and the second application may be applications (APPs) such as the video playing application, the voice playing application, and the picture/photo preview application.
  • FIG. 9 is a schematic diagram of a structure of a control display apparatus according to an embodiment of this application.
  • the apparatus may be an electronic device, or a component located in the electronic device, for example, a chip circuit.
  • the apparatus may implement the control adding method in the foregoing embodiment.
  • the apparatus may include a receiving module 901 and a processing module 902 .
  • the apparatus may further include other units or modules such as a communications module and a storage unit.
  • the communications module and the storage unit are not shown in FIG. 9 .
  • the apparatus further includes a display.
  • the display is configured to display at least one control.
  • the receiving module 901 is configured to receive a wake-up word input by a user.
  • the processing module 902 is configured to indicate, in response to the received wake-up word, the display to display a second interface of a first application, where the second interface includes a first control and a second control.
  • the processing module 902 is further configured to: receive a switching operation of the user, and indicate the display to display a first interface of a second application, where the first interface of the second application includes the first control.
  • the receiving module 901 is further configured to receive the wake-up word input again by the user.
  • the processing module 902 is further configured to indicate, in response to the received wake-up word, the display to display a second interface of the second application, where the second interface of the second application includes the first control and a third control.
  • the processing module 902 is further configured to: obtain a first component set based on a first interface type of the first application before the second interface of the first application is displayed.
  • the first component set includes the second control.
  • the processing module 902 is further configured to: obtain a second component set based on a first interface type of the second application before the second interface of the second application is displayed.
  • the second component set includes the third control.
  • the processing module 902 may obtain the first component set and the second component set from the storage unit.
  • the second interface of the first application further includes prompt information corresponding to the second control.
  • the processing module 902 is further configured to indicate, in response to a first voice instruction, the display to display a third interface of the first application, where the third interface includes a service response that is output after an operation corresponding to the first voice instruction is performed.
  • the processing module 902 is further configured to: enable the second control, perform the operation corresponding to the first voice instruction, and indicate to display the service response in the third interface of the first application; or receive, by using the communications module, the service response sent by a server, and indicate to display the service response in the third interface of the first application.
  • the communications module has a data receiving and sending function.
  • the processing module 902 is further configured to: indicate to display a control icon of the second control in the second interface of the first application; or indicate to display a control icon of the second control and the prompt information of the second control in the second interface of the first application.
  • the second interface of the first application further includes a control icon of a fourth control.
  • the fourth control is used to perform an operation corresponding to a second voice instruction.
  • the control icon of the second control is in a first color, the control icon of the fourth control is in a second color, and the first color is different from the second color.
  • the processing module 902 is further configured to: in response to the first voice instruction, enable the second control and perform the operation corresponding to the first voice instruction; and in response to the second voice instruction, send an indication signal to the server by using the communications module.
  • the indication signal is used to indicate the server to perform the operation corresponding to the second voice instruction.
  • the processing module 902 is further configured to indicate the display to display a first service response or a second service response.
  • the first service response is a service response that is output after the processing module 902 performs the operation corresponding to the first voice instruction.
  • the second service response is a service response received from the server, and the service response is output after the server executes the second voice instruction.
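One way to picture the dispatch just described is a small routing function: instructions whose control the device can serve locally are executed on-device (first service response), while others are forwarded to the server (second service response). The control sets, color groupings, and the server stub below are hypothetical placeholders, not the patent's implementation.

```python
# Hypothetical routing sketch for the dispatch described above: instructions
# whose control the device can serve locally are executed on-device; others are
# forwarded to the server. The control sets and server stub are placeholders.

LOCAL_CONTROLS = {"play/pause", "next episode"}   # icons in the first color
REMOTE_CONTROLS = {"search"}                      # icons in the second color

def server_execute(instruction):
    # Stand-in for sending an indication signal to the server and
    # receiving its service response back over the communications module.
    return f"server response for '{instruction}'"

def dispatch(instruction, control):
    """Return the first (local) or second (server) service response."""
    if control in LOCAL_CONTROLS:
        return f"local response for '{instruction}'"
    if control in REMOTE_CONTROLS:
        return server_execute(instruction)
    raise ValueError(f"unknown control: {control}")
```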
  • the receiving module 901 is configured to receive the wake-up word input by the user.
  • the processing module 902 is configured to indicate, in response to the received wake-up word, the display to display a first interface of the first application, where the first interface includes the first control.
  • the receiving module 901 is further configured to receive the first voice instruction input by the user.
  • the processing module 902 is further configured to: in response to the received first voice instruction, indicate the display to display the second interface of the first application, where the second interface includes the first control and the second control, and the second control is used to perform the operation corresponding to the first voice instruction.
  • the processing module 902 is further configured to: before indicating the display to display the second interface of the first application, obtain text content corresponding to the first voice instruction, where the text content corresponds to the second control; and when the first interface of the first application does not include the second control, obtain the second control.
  • the processing module 902 is further configured to obtain the second control based on an SDK table, where the SDK table includes the text content and the second control.
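A minimal sketch of the SDK-table lookup, assuming the table is a plain mapping from recognized text content to a control name; the entries are hypothetical examples, not the patent's actual SDK table.

```python
# Minimal sketch of the SDK-table lookup, assuming the table is a plain mapping
# from recognized text content to a control name (entries are hypothetical).

SDK_TABLE = {
    "play a next episode": "next episode",
    "previous chapter": "previous chapter",
}

def resolve_control(text_content, interface_controls):
    """Return (control, needs_adding): the control matching the text content,
    and whether it is absent from the current interface and must be added."""
    control = SDK_TABLE.get(text_content)
    needs_adding = control is not None and control not in interface_controls
    return control, needs_adding
```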
  • the receiving module 901 is further configured to receive again the first voice instruction input by the user.
  • the processing module 902 is further configured to indicate, in response to the first voice instruction, the display to display the third interface of the first application, where the third interface includes the service response that is output after the operation corresponding to the first voice instruction is performed.
  • FIG. 10 is a schematic diagram of a structure of an electronic device.
  • the device includes a processor 110 and a memory 120 .
  • the device further includes: a USB interface 130, a power management module 140, a battery 141, an antenna 1, an antenna 2, a mobile communications module 150, a wireless communications module 160, an audio module 170, a loudspeaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 191, a camera 192, a display 193, and the like.
  • the structure shown in this embodiment does not constitute a specific limitation to the electronic device.
  • the electronic device may include more or fewer components than those shown in the figure, or combine some components, or split some components, or have a different component arrangement.
  • the components shown in the figure may be implemented by hardware, software, or a combination of software and hardware.
  • the processor 110 may be formed by an integrated circuit (IC), for example, by a single packaged IC, or by a plurality of connected packaged ICs that have a same function or different functions.
  • the processor 110 may include a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the processor 110 may further include a hardware chip.
  • the hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof.
  • the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), a generic array logic (GAL), or any combination thereof.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, a universal serial bus (USB) interface, and/or the like.
  • the memory 120 is configured to store and exchange various types of data or software, including an SDK table, a first voice instruction, a second voice instruction, text content corresponding to the first voice instruction, text content corresponding to the second voice instruction, a first virtual component set, a second virtual component set, a control icon, and the like, and is further configured to store files such as audio, a video, and a picture/photo.
  • the memory 120 may store computer program instructions or code.
  • the memory 120 may include a volatile memory, for example, a random access memory (RAM), and may further include a non-volatile memory, for example, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
  • the memory 120 may further include a combination of the foregoing types of memories.
  • the display 193 may be configured to display control icons and prompt information corresponding to a first control, a second control, and a third control, and display different application interfaces, for example, a first interface and a second interface of a first application, and a first interface and a second interface of a second application.
  • the display 193 may further display a picture, a photo, text information, play a media stream such as a video or audio, and the like.
  • the display 193 may include a display panel and a touch panel.
  • the display panel may be configured in a form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • the touch panel is also referred to as a touchscreen, a touch-sensitive screen, or the like.
  • the electronic device 100 may include one or N displays 193, where N is a positive integer greater than 1.
  • the audio module 170 may implement voice interaction between a user and the electronic device.
  • the audio module 170 includes an audio circuit, which may transmit, to the loudspeaker 170A, a signal converted from received audio data.
  • the loudspeaker 170A converts the signal into a sound signal for outputting.
  • the microphone 170C is configured to: receive a sound signal input by the user, for example, a wake-up word, the first voice instruction, or the second voice instruction, convert the received sound signal into an electrical signal, and then transmit the electrical signal to the audio module 170. After receiving the electrical signal, the audio module 170 converts the electrical signal into audio data, and then outputs the audio data to the processor 110 for further processing, to obtain text content corresponding to the voice instruction.
  • the sensor module 180 may include at least one sensor, such as a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a touch sensor, or a fingerprint sensor.
  • the button 191 includes a power button, a volume button, and the like.
  • the USB interface 130 is an interface that conforms to a USB standard specification, and may be specifically a mini USB interface, a micro USB interface, a USB type-C interface, or the like.
  • the USB interface 130 may be configured to connect to a charger to charge the electronic device, may be configured to transmit data between the electronic device and a peripheral device, or may be configured to connect to a headset and play audio through the headset.
  • the interface may be further configured to connect to another electronic device such as a virtual reality device.
  • the power management module 140 is configured to connect the battery 141 to the processor 110 .
  • the power management module 140 supplies power to the processor 110 , the memory 120 , the display 193 , the camera 192 , the mobile communications module 150 , the wireless communications module 160 , and the like.
  • the power management module 140 may alternatively be disposed in the processor 110 .
  • a wireless communication function of the electronic device may be implemented through the antenna 1 , the antenna 2 , the mobile communications module 150 , the wireless communications module 160 , a modem processor, a baseband processor (or a baseband chip), and the like.
  • the antenna 1 and the antenna 2 are configured to transmit and receive an electromagnetic wave signal.
  • Each antenna of the electronic device may be configured to cover one or more communication frequency bands. Different antennas may be further multiplexed, to improve antenna utilization.
  • the mobile communications module 150 may provide a solution that includes wireless communication such as 2G/3G/4G/5G and that is applied to the electronic device.
  • the mobile communications module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • at least some function modules of the mobile communications module 150 may be disposed in the processor 110 .
  • the wireless communications module 160 may provide a wireless communication solution that is applied to the electronic device, and that includes a wireless local area network (WLAN) (for example, a wireless fidelity (Wi-Fi) network), Bluetooth (BT), a global navigation satellite system (GNSS), frequency modulation (FM), a near field communication (NFC) technology, an infrared (IR) technology, and the like.
  • the wireless communications module 160 may be one or more components integrating at least one communications processor module.
  • the wireless communications module 160 may further receive a to-be-sent signal from the processor 110 , perform frequency modulation and amplification on the signal, and convert the signal into an electromagnetic wave for radiation through the antenna 2 .
  • the antenna 1 and the mobile communications module 150 are coupled, and the antenna 2 and the wireless communications module 160 are coupled, so that the electronic device can communicate with a network and another device by using a wireless communications technology.
  • the wireless communications technology may include a global system for mobile communications (GSM), a general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, a GNSS, a WLAN, NFC, FM, an IR technology, and/or the like.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), and a Beidou navigation satellite system (BDS).
  • a function of the receiving module 901 may be implemented by the audio module 170 or the microphone 170 C in the audio module 170
  • a function of the processing module 902 may be implemented by components such as the processor 110 and the display 193
  • a function of the storage unit may be implemented by the memory 120 .
  • an embodiment of this application further provides a system.
  • the system includes at least one electronic device, and may further include a server, for example, a cloud server, configured to implement the control display methods in the foregoing embodiments.
  • a structure of the server may be the same as or different from a structure of the electronic device shown in FIG. 10 . This is not limited in this embodiment.
  • an embodiment of this application further provides a computer storage medium.
  • the computer storage medium may store a program. When the program is executed, some or all steps of the control adding method provided in this application may be performed.
  • the storage medium includes but is not limited to a magnetic disk, an optical disc, a ROM, a RAM, or the like.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a dedicated computer, a computer network, or other programmable apparatuses.
  • the computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • in the embodiments of this application, "a plurality of" means two or more unless otherwise specified.
  • terms such as “first” and “second” are used in the embodiments of this application to distinguish between same items or similar items that provide basically same functions or purposes. A person skilled in the art may understand that the terms such as “first” and “second” do not limit a quantity or an execution sequence, and the terms such as “first” and “second” do not indicate a definite difference.

US18/006,703 2020-07-28 2021-07-15 Control display method and device Pending US20230317071A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN202010736457.4 2020-07-28
CN202010736457.4A CN114007117B (zh) 2020-07-28 2020-07-28 一种控件显示方法和设备
PCT/CN2021/106385 WO2022022289A1 (zh) 2020-07-28 2021-07-15 一种控件显示方法和设备

Publications (1)

Publication Number Publication Date
US20230317071A1 true US20230317071A1 (en) 2023-10-05

Family

ID=79920314


Country Status (4)

Country Link
US (1) US20230317071A1 (de)
EP (1) EP4181122A4 (de)
CN (1) CN114007117B (de)
WO (1) WO2022022289A1 (de)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20230152871A1 (en) * 2021-11-16 2023-05-18 Asustek Computer Inc. Electronic device and connecting device



Also Published As

Publication number Publication date
CN114007117A (zh) 2022-02-01
EP4181122A4 (de) 2024-01-10
EP4181122A1 (de) 2023-05-17
WO2022022289A1 (zh) 2022-02-03
CN114007117B (zh) 2023-03-21


Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION