WO2022022289A1 - Control display method and device - Google Patents

Control display method and device

Info

Publication number
WO2022022289A1
Authority
WO
WIPO (PCT)
Prior art keywords
control
interface
application
voice command
response
Prior art date
Application number
PCT/CN2021/106385
Other languages
English (en)
French (fr)
Inventor
陈浩
高璋
陈晓晓
熊石一
殷志华
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP21849198.3A priority Critical patent/EP4181122A4/en
Priority to US18/006,703 priority patent/US20230317071A1/en
Publication of WO2022022289A1 publication Critical patent/WO2022022289A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/0482Interaction with lists of selectable items, e.g. menus
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/167Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/422Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
    • H04N21/42203Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS] sound input device, e.g. microphone
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/432Content retrieval operation from a local storage medium, e.g. hard-disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223Execution procedure of a spoken command

Definitions

  • the present application relates to the technical field of voice control, and in particular, to a control display method and device.
  • the present application provides a control display method and device, so that the same control can be displayed in different application interfaces, thereby improving user satisfaction. Specifically, the following technical solutions are disclosed:
  • the present application provides a control display method, which can be applied to an electronic device. The electronic device includes a display screen, and a first interface of a first application is displayed on the display screen, where the first interface includes a first control.
  • the method includes: receiving a wake-up word input by a user; in response to the received wake-up word, displaying a second interface of the first application, where the second interface includes the first control and a second control; receiving a switching operation of the user, and displaying a first interface of a second application on the display screen, where the first interface of the second application includes the first control; receiving the wake-up word input again by the user; and, in response to the received wake-up word, displaying a second interface of the second application, where the second interface of the second application includes the first control and a third control.
  • In this way, when the electronic device receives the wake-up word input by the user, it automatically adds and displays, on the current interface of the first application, the second control that is not yet in that interface, thereby realizing the automatic addition and display of controls associated with the first application. When the first application is switched to the second application, the third control is automatically added and displayed on the current interface of the second application. Thus, when the user switches applications, the display screen of the electronic device displays both the first control from the original first application and the third control from the second application. This ensures that the controls associated with different applications are displayed on the display screen, thereby enhancing the voice service function of the electronic device and improving user satisfaction.
  • Before displaying the second interface of the first application in response to the received wake-up word, the method further includes: acquiring a first component set according to the first interface type of the first application, where the first component set includes the second control.
  • the first component set further includes a third control, a fourth control, and the like.
  • the component set includes at least one voice control, so that the voice control can be automatically added according to the interface type of the current interface.
  • the component set is also called a "virtual component set". For example, the component set corresponding to the first interface type of the first application is called the first virtual component set, and the component set corresponding to the first interface type of the second application is called the second virtual component set.
  • Before displaying the second interface of the second application in response to the received wake-up word, the method further includes: acquiring a second component set according to the first interface type of the second application, where the second component set includes the third control.
  • That is, a correspondence between the first interface type of the second application and the second component set is established, and the second component set includes the third control, so that the third control can be automatically added according to the current interface type.
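The correspondence between interface types and component sets described above can be pictured as a simple lookup table. The following is a minimal Python sketch; the application names, interface-type keys, and control names are illustrative placeholders, not terms from the application:

```python
# Hypothetical sketch of component-set lookup by interface type.
# Keys and control names are illustrative placeholders.
COMPONENT_SETS = {
    ("first_app", "playback_interface"): {"play_pause", "next_episode", "previous_episode"},
    ("second_app", "playback_interface"): {"play_pause", "volume_up", "volume_down"},
}

def get_component_set(app, interface_type):
    """Return the (virtual) component set for the current interface type."""
    return COMPONENT_SETS.get((app, interface_type), set())

def controls_to_add(app, interface_type, displayed):
    """Controls in the component set that are not yet on the current interface."""
    return get_component_set(app, interface_type) - displayed
```

On receiving the wake-up word, the device would call `controls_to_add` with the controls already on screen and add the remainder to the interface.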
  • the second interface of the first application further includes: prompt information corresponding to the second control.
  • the prompt information may be: next episode, previous episode, play/pause, selection, and the like.
  • the method further includes: displaying a third interface of the first application in response to a first voice command, where the third interface includes the service response output after the operation corresponding to the first voice command is executed.
  • That is, a control in the first application is started, the first voice command is executed, and the resulting service response is displayed on the third interface, thereby providing the user with the corresponding voice service.
  • Displaying the third interface of the first application in response to the first voice command includes: starting the second control, executing the operation corresponding to the first voice command, and displaying the service response on the third interface of the first application; or, receiving, by the electronic device, the service response sent by a server, and displaying the service response on the third interface of the first application.
  • That is, the function of the second control can be realized by invoking the server, which enhances the service capability of the electronic device, thereby providing the user with all the voice control functions displayed on the current interface and improving user satisfaction.
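The local-or-server choice above can be sketched as a small dispatch function. The control names and response strings below are assumptions for illustration only:

```python
# Hypothetical sketch: execute a voice command locally when the control
# supports it, otherwise request the service response from a server.
LOCALLY_SUPPORTED = {"play_pause", "next_episode"}

def fetch_from_server(control):
    # Stand-in for sending an instruction signal to the server and
    # receiving the service response it produces.
    return f"server response for {control}"

def service_response(control):
    if control in LOCALLY_SUPPORTED:
        # Start the control and execute the operation on-device.
        return f"local response for {control}"
    return fetch_from_server(control)
```

Either way, the response that comes back is what the third interface displays.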
  • Displaying the second interface of the first application includes: displaying a control icon of the second control on the second interface of the first application; or, displaying the control icon of the second control and the prompt information of the second control on the second interface of the first application.
  • That is, the control icon of the second control and its prompt information are added and displayed on the current interface, which makes it easy for the user to issue voice commands according to the prompt information and improves the efficiency of voice interaction.
  • the second interface of the first application further includes a control icon of a fourth control, where the fourth control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the fourth control is a second color, and the first color is different from the second color. In response to the first voice command, the electronic device starts the second control and executes the operation corresponding to the first voice command; in response to the second voice command, the electronic device sends an instruction signal to a server, where the instruction signal is used to instruct the server to execute the operation corresponding to the second voice command.
  • That is, the electronic device uses different colors to distinguish controls that can provide voice services locally from controls that cannot. The icons of controls that are supported locally are displayed in the first color, and the icons of controls that are not supported locally are displayed in the second color. This two-color display makes the controls easy for users to identify and distinguish.
  • For a second voice command for which the electronic device cannot provide a service response locally, the response can be generated with the help of a server or another device and then transmitted to the electronic device, thereby improving the service capability of the electronic device and meeting the needs of users.
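The two-color rule above reduces to a one-line choice. In this sketch, the hex values and control names are arbitrary placeholders:

```python
# Hypothetical sketch of the two-color icon rule.
FIRST_COLOR = "#1E90FF"   # controls with a local service response
SECOND_COLOR = "#9E9E9E"  # controls that need a server

def icon_color(control, locally_supported):
    """Pick the icon color based on where the control's service runs."""
    return FIRST_COLOR if control in locally_supported else SECOND_COLOR
```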
  • the present application also provides a control display method, which is applied to an electronic device including a display screen. The method includes: receiving a wake-up word input by a user; in response to the received wake-up word, displaying a first interface of a first application on the display screen, where the first interface includes a first control; receiving a first voice command input by the user; and, in response to the received first voice command, displaying a second interface of the first application, where the second interface includes the first control and a second control, and the second control is used to execute an operation corresponding to the first voice command.
  • In this method, the electronic device can display, on its current interface, the control corresponding to any voice command issued by the user, so as to provide the corresponding service when the user issues the voice command again. The method realizes the automatic addition and display of the second control, increases the voice service functions, and improves user satisfaction.
  • Before displaying the second interface of the first application in response to the received first voice command, the method further includes: obtaining the text content corresponding to the first voice command, where the text content corresponds to the second control; and, when the first interface of the first application does not include the second control, acquiring the second control.
  • Acquiring the second control includes: acquiring the second control through an SDK table, where the SDK table includes the text content and the second control.
  • This implementation uses the SDK table to expand the voice control function of the electronic device, and realizes the automatic addition and display of the second control.
  • the SDK table further includes: the first control and the text content corresponding to the first control, the third control and the text content corresponding to the third control, and the like.
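The SDK table described above behaves like a mapping from recognized text content to controls. A minimal sketch, with made-up entries standing in for the real table:

```python
# Hypothetical sketch of the SDK table: recognized text -> control.
SDK_TABLE = {
    "next episode": "next_episode_control",
    "previous episode": "previous_episode_control",
    "play": "play_pause_control",
}

def control_to_add(text, interface_controls):
    """Return the control for a voice command's text content, or None if
    the text is unknown or the control is already on the interface."""
    control = SDK_TABLE.get(text)
    if control is None or control in interface_controls:
        return None
    return control
```

Only when a lookup succeeds and the control is absent from the current interface is it added and displayed.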
  • the method further includes: receiving the first voice command input again by the user; and, in response to the first voice command, displaying a third interface of the first application, where the third interface includes the service response output after executing the operation corresponding to the first voice command.
  • Displaying the third interface of the first application in response to the first voice command includes: starting the second control, executing the operation corresponding to the first voice command, and displaying the service response on the third interface of the first application; or, receiving, by the electronic device, the service response sent by the server, and displaying the service response on the third interface of the first application.
  • the function of the second control can be realized by invoking the server, which enhances the voice service capability of the electronic device and improves user satisfaction.
  • Using the cloud server to provide the service response also avoids developing software for the second control locally on the electronic device, which saves software development costs.
  • Displaying the second interface of the first application includes: displaying a control icon of the second control on the second interface of the first application; or, displaying the control icon of the second control and the prompt information of the second control on the second interface of the first application.
  • the second interface of the first application further includes a control icon of a third control, where the third control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the third control is a second color, and the first color is different from the second color. In response to the first voice command, the electronic device starts the second control and executes the operation corresponding to the first voice command; in response to the second voice command, the electronic device sends an instruction signal to the server, where the instruction signal is used to instruct the server to execute the operation corresponding to the second voice command.
  • the present application provides a control display device, including a display screen on which a first interface of a first application is displayed, where the first interface includes a first control. The device further includes:
  • a receiving module, configured to receive a wake-up word input by a user; and a processing module, configured to instruct the display screen to display a second interface of the first application in response to the received wake-up word, where the second interface includes the first control and a second control, and to receive a switching operation of the user and instruct the display screen to display a first interface of a second application, where the first interface of the second application includes the first control. The receiving module is further configured to receive the wake-up word input again by the user; the processing module is further configured to instruct the display screen to display a second interface of the second application in response to the received wake-up word, where the second interface of the second application includes the first control and a third control.
  • the processing module is further configured to, before displaying the second interface of the first application, acquire a first component set according to the first interface type of the first application, where the first component set includes the second control.
  • the processing module is further configured to, before displaying the second interface of the second application, acquire a second component set according to the first interface type of the second application, where the second component set includes the third control.
  • the second interface of the first application further includes: prompt information corresponding to the second control.
  • the processing module is further configured to display a third interface of the first application in response to the first voice command, where the third interface includes the service response output after executing the operation corresponding to the first voice command.
  • the processing module is further configured to start the second control, execute the operation corresponding to the first voice command, and instruct the display screen to display the service response on the third interface of the first application; or, to receive, through a communication module, the service response sent by the server, and instruct the display screen to display the service response on the third interface of the first application.
  • the processing module is further configured to instruct the display screen to display a control icon of the second control on the second interface of the first application; or, to instruct the display screen to display the control icon of the second control and the prompt information of the second control on the second interface of the first application.
  • the second interface of the first application further includes a control icon of a fourth control, where the fourth control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the fourth control is a second color, and the first color is different from the second color.
  • the processing module is further configured to, in response to the first voice command, start the second control and execute the operation corresponding to the first voice command; and, in response to the second voice command, send an instruction signal to the server, where the instruction signal is used to instruct the server to execute the operation corresponding to the second voice command.
  • the present application further provides a control display device, the device includes a display screen, and the device further includes:
  • a receiving module, configured to receive a wake-up word input by a user; and a processing module, configured to instruct the display screen to display a first interface of a first application in response to the received wake-up word, where the first interface includes a first control. The receiving module is further configured to receive a first voice command input by the user; the processing module is further configured to instruct the display screen to display a second interface of the first application in response to the received first voice command, where the second interface includes the first control and a second control, and the second control is used to execute an operation corresponding to the first voice command.
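The split between the receiving module and the processing module can be sketched as two small classes. The wake-up word and control names below are placeholders, not values from the application:

```python
# Hypothetical sketch of the receiving/processing module split.
WAKE_WORD = "hello device"  # placeholder wake-up word

class ReceivingModule:
    """Normalizes user input (e.g. text from the microphone pipeline)."""
    def receive(self, utterance):
        return utterance.strip().lower()

class ProcessingModule:
    """Decides which controls the display screen should show."""
    def handle(self, utterance, current_controls):
        if utterance == WAKE_WORD:
            # On wake-up, add the second control to the current interface.
            return current_controls | {"second_control"}
        return current_controls
```

The processing module never talks to the microphone directly; it only reacts to what the receiving module hands it, which is the division of labor the device claims describe.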
  • the processing module is further configured to, before displaying the second interface of the first application, obtain the text content corresponding to the first voice command, where the text content corresponds to the second control, and to acquire the second control when the first interface of the first application does not include the second control.
  • the processing module is further configured to acquire the second control through an SDK table, where the SDK table includes the text content and the second control.
  • the receiving module is further configured to receive the first voice command input again by the user; the processing module is further configured to, in response to the first voice command, instruct the display screen to display a third interface of the first application, where the third interface includes the service response output after performing the operation corresponding to the first voice command.
  • the processing module is further configured to start the second control, execute the operation corresponding to the first voice command, and instruct the display screen to display the service response on the third interface of the first application; or, to receive, through a communication module, the service response sent by the server, and instruct the display screen to display the service response on the third interface of the first application.
  • the processing module is further configured to instruct the display screen to display a control icon of the second control on the second interface of the first application; or, to instruct the display screen to display the control icon of the second control and the prompt information of the second control on the second interface of the first application.
  • the second interface of the first application further includes a control icon of a third control, where the third control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the third control is a second color, and the first color is different from the second color.
  • the processing module is further configured to, in response to the first voice command, start the second control to execute the operation corresponding to the first voice command; and, in response to the second voice command, send an instruction signal to the server, where the instruction signal is used to instruct the server to execute the operation corresponding to the second voice command.
  • the present application further provides an electronic device. The electronic device includes a processor and a memory, the processor is coupled with the memory, and the electronic device may further include a transceiver and the like.
  • the memory is used for storing computer program instructions; the processor is used for executing the program instructions stored in the memory, so that the electronic device executes the method in the various implementation manners of the first aspect or the second aspect.
  • Transceivers are used to implement data transmission functions.
  • the electronic device further includes an audio module, a speaker, a receiver, a microphone, and the like. Specifically, after the microphone of the electronic device receives the wake-up word input by the user, it transmits the wake-up word to the audio module; the processor processes the wake-up word parsed by the audio module and, in response to the received wake-up word, instructs the display screen to display the second interface of the first application, where the second interface includes the first control and the second control. The processor is further configured to receive a switching operation of the user and instruct the display screen to display the first interface of the second application, where the first interface of the second application includes the first control. When the microphone receives the wake-up word input again by the user, the processor is configured to, in response to the received wake-up word, instruct the display screen to display the second interface of the second application, where the second interface of the second application includes the first control and the third control.
  • Alternatively, the microphone of the electronic device receives the wake-up word input by the user, and the processor, in response to the received wake-up word, instructs the display screen to display the first interface of the first application, where the first interface includes the first control. The microphone also receives the first voice command input by the user, and the processor, in response to the received first voice command, instructs the display screen to display the second interface of the first application, where the second interface includes the first control and a second control, and the second control is used to execute the operation corresponding to the first voice command.
  • the present application further provides a computer-readable storage medium in which instructions are stored. When the instructions are executed on a computer or a processor, they can be used to perform the methods in the foregoing first aspect and its implementation manners, or the methods in the foregoing second aspect and its implementation manners.
  • the present application also provides a computer program product, where the computer program product includes computer instructions. When the instructions are executed by a computer or a processor, the methods in the implementation manners of the first aspect and the second aspect can be implemented.
  • The beneficial effects of the technical solutions in the implementation manners of the third aspect to the sixth aspect are the same as those of the foregoing first aspect and second aspect. For details, refer to the descriptions of the beneficial effects in the implementation manners of the first aspect and the second aspect, which are not repeated here.
  • FIG. 1 is a schematic diagram of the architecture of a smart device system to which an embodiment of the present application is applied;
  • FIG. 3 is a schematic diagram of displaying controls on a first interface of a first application according to an embodiment of the present application
  • FIG. 4A is a schematic diagram of displaying a second control on a second interface of a first application according to an embodiment of the present application
  • FIG. 4B is a schematic diagram of displaying prompt information on a second interface of a first application according to an embodiment of the present application
  • FIG. 5 is a schematic diagram of jumping to a global response according to a voice command according to an embodiment of the present application
  • FIG. 6 is a flowchart of another control display method provided by an embodiment of the present application.
  • FIG. 7A is a schematic diagram of displaying a second control in a second interface of a first application according to an embodiment of the present application
  • FIG. 7B is a schematic diagram of displaying a third control in a second interface of a second application according to an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a distributed interface supporting all voice commands according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a control display device according to an embodiment of the present application.
  • FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • FIG. 1 is a schematic structural diagram of a smart device system applied in an embodiment of the present application.
  • the system may include at least one electronic device.
  • the electronic devices include but are not limited to: mobile phones, tablet computers (Pad), personal computers, virtual reality (VR) terminal devices, augmented reality (AR) terminal devices, wearable devices, televisions (TV), vehicle-mounted terminal devices, etc.
  • the system shown in FIG. 1 includes a device 101, a device 102, and a device 103, wherein the device 101 is a mobile phone, the device 102 is a tablet computer, and the device 103 is a TV.
  • the system may also include more or fewer devices, for example, a cloud server 104.
  • the cloud server 104 is wirelessly connected to the device 101, the device 102, and the device 103, respectively, so as to realize the interconnection among the device 101, the device 102, and the device 103.
  • each of the above-mentioned electronic devices includes an input and output device, which can be used to receive operation instructions input by a user and to display information to the user.
  • the input and output devices may be independent devices; for example, the input device may be a keyboard, a mouse, a microphone, etc., and the output device may be a display screen. Alternatively, the input and output devices may be integrated in one device, such as a touch display screen.
  • the input and output device may display a user interface (UI) to interact with the user.
  • the UI is a medium interface for interaction and information exchange between an application program or an operating system and a user, and is used to realize the conversion between an internal form of information and a form acceptable to the user.
  • the user interface of an application is source code written in a specific computer language, such as Java or extensible markup language (XML). The interface source code is parsed and rendered on the electronic device and finally presented as content the user can recognize, such as pictures, text, buttons, and other controls.
  • Controls, also known as widgets, are the basic elements of the user interface. Typical controls include toolbars, menu bars, text boxes, buttons, scroll bars (scrollbar), pictures, and text.
  • a control can have its own attributes and content, and the attributes and content of a control in the user interface can be defined by tags or nodes. For example, XML specifies the controls contained in the interface through nodes such as <TextView>, <ImgView>, and <VideoView>.
  • a node corresponds to a control or property in the user interface. After parsing and rendering, the node is presented as user-visible content.
  • hybrid applications usually also contain web pages in the user interface.
  • a web page also known as a page, can be understood as a special control embedded in the user interface of an application.
  • a web page is source code written in a specific computer language, such as hypertext markup language (HTML), cascading style sheets (CSS), and JavaScript (JS); web page source code can be loaded and displayed as user-recognizable content by a browser or a browser-like web page display component.
  • the specific content contained in a web page is also defined by tags or nodes in the source code of the web page. For example, HTML defines the elements and attributes of a web page through <p>, <img>, <video>, and <canvas>.
  • the graphical user interface (GUI) refers to a user interface, displayed in a graphical manner, related to the operation of an electronic device. It can be interface elements such as windows and controls displayed on the display screen of the electronic device.
  • the display forms of the controls include various visual interface elements such as icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, and navigation bars.
  • an integrated development environment (Integrated Development Environment, IDE) is used to develop and generate controls. The IDE integrates multiple functions such as editing, design, and debugging in a common environment, thereby providing developers with strong support for developing applications quickly and conveniently.
  • the IDE mainly includes menus, toolbars, and some windows, wherein the toolbar can be used to add controls to a form.
  • a window is a small, usually rectangular, area of the screen that can be used to display information to the user and accept user input.
  • This embodiment provides a control display method, which provides a user with common voice interaction capabilities by adding virtual voice controls to the display screen, thereby improving user satisfaction.
  • the method can be applied to any one of the aforementioned electronic devices, specifically, as shown in Figure 2, the method includes:
  • 101. When the electronic device receives a wake-up word input by the user, display at least one control in the first interface of the first application on the display screen of the electronic device.
  • the first interface may be the current interface.
  • the electronic device when the electronic device obtains the wake-up word input by the user, it will automatically enter an instruction input state and wait for the user to issue a voice instruction.
  • the wake-up word may be a predefined wake-up word, such as "Xiaoyi Xiaoyi" or "Xiao Ai", or a generalized wake-up condition: for example, when the camera of the electronic device detects that the user's attention is focused on the current display screen, or when, during voice interaction with the electronic device, a voice command conforming to a preset voice command set is detected, the electronic device can be woken up and enter the voice command input state.
  • the first interface of the first application is lit up, at least one control supported by the electronic device is displayed in the first interface, and the at least one control includes the first control.
  • the first control is a control displayed in the current interface when the electronic device is awakened.
  • the first control displayed in the current interface includes any one of the following: play/pause 31, turn on/off bullet screen 32, send bullet screen 33, double speed 34, exit "×" 35, etc.
  • the display form of the first control may be an icon, a button, a menu, a tab, a text box, etc. In this embodiment, the display of the control in the form of an icon is used as an example for description.
  • the first application is a video playback application, such as Huawei Video or Tencent Video, and the first interface is a video playback interface.
  • the first interface is a text browsing interface.
  • the first control is a common control, that is, a control commonly used in various applications; for example, the common control may be a "play/pause" control, or any control in a virtual component set.
  • 102. Receive the first voice instruction issued by the user. Specifically, the electronic device receives the first voice instruction through a microphone.
  • the first voice command indicates the service that the user expects the current interface to respond to.
  • the first voice command is "play at double speed"
  • the first voice command is "zoom in".
  • the first voice command may also be "present a control icon" or the like.
  • the text content corresponds to the second control
  • the second control is used to execute the operation corresponding to the first voice instruction.
  • traverse all the controls in the first interface of the first application and determine whether there is a second control in the first interface that can execute the first voice command; for example, determine whether a first control in the first interface can execute the "2x speed playback" operation.
  • the control icons of all the controls included in the first interface are: play/pause 31, turn on/off bullet screen 32, send bullet screen 33, double speed 34, and exit "×" 35; find out whether there is a control among them that can perform the "double-speed playback" operation.
  • the method further includes: feeding back the corresponding service response to the user.
  • for example, if the electronic device finds that a control in the first interface can provide the function of "play at 2x speed", it activates that control, executes the "play at 2x speed" operation, and displays the service response in the current interface.
  • an implementation manner of determining the second control is to search for the second control corresponding to the first voice command through a software development kit (software development kit, SDK) table.
  • the SDK is a collection of development tools used to create application software for a specific software package, software framework, hardware platform, operating system, etc.
  • for example, the SDK may be an SDK used for developing applications on the Windows platform; it can provide a programming language with the files required by the application programming interface (API), and can also communicate with an embedded system.
  • the SDK table includes a correspondence between the text content of at least one voice command and at least one control, and the control may be represented by a control icon.
  • an SDK table may include, but is not limited to, the following correspondences: play/pause, next episode, open/close bullet screen, send bullet screen, double speed, and exit.
  • the SDK table may be pre-stored in the electronic device, or the electronic device may be obtained from a cloud server.
  • the SDK table can be updated in real time and acquired by the electronic device periodically, so as to provide users with rich voice service functions.
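As a rough illustration of the SDK table described above, the correspondence between the text content of a voice command and the control that executes it can be modeled as a simple lookup; the command strings and control names below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical SDK table: voice-command text -> control identifier.
SDK_TABLE = {
    "play": "play_pause",
    "pause": "play_pause",
    "next episode": "next_episode",
    "open bullet screen": "toggle_bullet_screen",
    "close bullet screen": "toggle_bullet_screen",
    "double speed": "double_speed",
    "exit": "exit",
}

def find_second_control(voice_text):
    """Return the control matching the recognized voice text, or None."""
    return SDK_TABLE.get(voice_text.strip().lower())
```

In this sketch, an unrecognized command simply yields no control, which corresponds to the case where the current interface cannot execute the voice command and a second control must be added.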
  • Another implementation manner of determining the second control includes:
  • the first virtual component set includes one or more controls displayed in the first interface of the first application when the electronic device receives the wake-up word and enters the instruction input state.
  • the second virtual component set includes at least one preset control; the number of controls included in the second virtual component set is greater than or equal to the number of controls in the first virtual component set, and the second virtual component set is associated with the first interface type of the first application.
  • the first interface type of the first interface includes: video playback, music playback, picture/photo preview, text browsing, and the like.
  • a second control is included in the second virtual component set, and the second control may be a common control. For example, if the "set" control belongs to the voice controls in the virtual component set of the video playback interface type, the second virtual component set is determined to be the virtual component set corresponding to the video playback interface.
  • 105-2. Determine the second control, where the second control belongs to the second virtual component set but does not belong to the first virtual component set.
  • the number of the second controls may be one or more.
  • for example, if the first virtual component set includes only one control, "play/pause", and the second virtual component set includes play/pause, next episode, open/close bullet screen, send bullet screen, double speed, and exit — six controls in total — it is determined that the second controls include all controls other than the "play/pause" control.
  • that is, the second controls include: next episode, open/close bullet screen, send bullet screen, double speed, and exit.
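The determination of second controls described in step 105-2 is essentially a set difference between the interface-type component set and the controls already on screen. A minimal sketch, using the example sets from this passage:

```python
def determine_second_controls(first_set, second_set):
    """Controls in the interface-type set that are missing from the current interface."""
    return second_set - first_set

first_virtual_components = {"play/pause"}
second_virtual_components = {
    "play/pause", "next episode", "open/close bullet screen",
    "send bullet screen", "double speed", "exit",
}
# Five controls to add, matching the example in the text.
missing = determine_second_controls(first_virtual_components, second_virtual_components)
```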
  • in step 105, after determining the second control from the SDK table, the electronic device adds the control icon corresponding to the second control to the second interface of the first application. Similarly, if the electronic device determines multiple second controls according to the virtual component set, all the second controls are displayed on the second interface.
  • the control icon corresponding to the "next episode” control is used 36 is displayed on the current video playing interface (ie, the second interface).
  • the second interface further includes a first control, and the control icons corresponding to the first control in this example include: 31. 32. 33. Speed 34 and ⁇ 35.
  • the method further includes: adding prompt information corresponding to the second control to the second interface.
  • Each control corresponds to one piece of prompt information, and each piece of prompt information is used to prompt the voice function corresponding to the control.
  • the user can issue a corresponding voice command according to the prompt information.
  • the control corresponding to the voice command is activated according to the corresponding relationship. For example, as shown in FIG. 4B , when the first voice command input by the user is “play the next episode”, query the first interface shown in FIG.
  • the corresponding relationship between the prompt information and the controls may be stored in the above-mentioned SDK table, or stored separately, which is not limited in this embodiment.
  • the prompt information and the text content may be the same or different.
  • the first voice instruction issued by the user again may include voice content in addition to the prompt information, which is not limited in this embodiment.
  • the control icon corresponding to the second control and the prompt information can be displayed together in a blank area of the current interface, or can be added in the form of a floating window.
  • this embodiment does not limit the specific addition method.
  • in this embodiment, when the electronic device is woken up by the user, it can display, on the current interface, the control corresponding to any voice command issued by the user, so that the corresponding control is available when the user issues the voice command again. This avoids the defect that a voice command issued by the user cannot be executed in the current interface because different application interfaces contain different controls.
  • This method uses the SDK table or the virtual component set to expand the voice control function of the electronic device: automatic addition and display of the second control is realized, the service function for voice text content is enhanced, and user satisfaction is improved.
  • the above method also includes:
  • 106. Activate the second control and execute the operation corresponding to the first voice command, so as to provide a voice service for the user.
  • a possible implementation is to directly start the second control after using the SDK table to display the second control on the second interface of the first application, execute the operation corresponding to the text content of the first voice command, and output the service response.
  • if the text content corresponding to the first voice command includes, or is the same as, the prompt information corresponding to the second control, start the second control, execute the operation corresponding to the first voice command, and output a service response.
  • for example, when receiving a voice instruction of "play the next episode" (or "next episode") issued again by the user, parse the voice instruction to obtain text content including "next episode", start the "next episode" control, and perform the operation of playing the next episode, so as to provide the voice service to the user.
  • 106-1. Detect whether the second control in the second interface of the electronic device can execute the operation corresponding to the first voice command, that is, determine whether the second control can provide the functional service for the first voice command.
  • if not, the electronic device can obtain a service response through the cloud server or another electronic device: the operation corresponding to the voice command is executed by the cloud server or the other electronic device, and the generated service response is transmitted to the electronic device, which receives the service response and displays it on the display screen.
  • the cloud server or the second electronic device performs enlargement processing on the original picture.
  • the cloud server may also send the original picture to another electronic device with the "enlarge picture" function, obtain the enlarged picture, and finally send the enlarged picture to the electronic device.
  • the method utilizes a cloud server to provide a service response for the electronic device, avoids software development for the second control locally on the electronic device, and saves software development costs.
  • if the second control can provide the functional service, start the second control, execute the operation corresponding to the first voice command, and output a service response.
  • the specific process is the same as the foregoing step 104, and will not be repeated here.
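The flow of step 106-1 with the cloud fallback can be sketched as a local capability check with delegation; the capability map and the cloud call below are stand-ins for the real mechanisms, not APIs from the patent:

```python
def execute_command(control, command, local_capabilities, cloud_execute):
    """Run the command locally if the control supports it; otherwise
    delegate to the cloud server and return its service response."""
    if command in local_capabilities.get(control, set()):
        return f"local response: {command}"
    # cloud_execute stands in for the round trip to the cloud server
    # (or to another electronic device) described in the text.
    return cloud_execute(control, command)

# The picture-view control only supports "next picture" locally,
# so "zoom in" falls back to the cloud.
caps = {"picture_view": {"next picture"}}
resp = execute_command("picture_view", "zoom in", caps,
                       lambda c, cmd: f"cloud response: {cmd}")
```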
  • the method further includes: when the electronic device receives a second voice command issued by the user, obtaining the text content corresponding to the second voice command. If the text content corresponding to the second voice command is different from the text content of the first voice command in step 102 above, but is the same as the text content of a second control that has already been added in step 105, the added second control is started, the operation corresponding to the text content of the second voice instruction is executed, and the corresponding service response is output.
  • for example, the second voice command is "close the bullet screen", which is different from the first voice command "play the next episode"; since the "close the bullet screen" second control has already been added, that control is activated, the "close the bullet screen" operation is executed, and the result is displayed to the user through the current interface.
  • the control that executes the text content of the second voice instruction may be one of the multiple second controls determined by the virtual component set, or may be one of the controls originally included in the electronic device, which is not limited in this embodiment.
  • the method further includes: the electronic device displays, in a differentiated manner, the controls that can provide the voice service locally and the controls that cannot; for example, controls supported locally are displayed in a first color (for example, green), and controls not supported locally are displayed in the second interface in a second color (for example, red), so as to facilitate identification and distinction by the user.
  • in this way, the functions of all the second controls can be realized by calling the cloud server, which enhances the service capability of the electronic device, provides the user with all the voice control functions displayed on the current interface, and improves user satisfaction.
  • the service response corresponding to the displayed second control may include an interface response and a global response.
  • the service response output by the electronic device includes an interface response and a global response.
  • the interface response means that the electronic device does not need to jump from the current first application to a second application when performing an operation; the operation can be completed on the interface of the current first application, for example, the above operations "play the next episode", "close the bullet screen", "enlarge the picture", and so on.
  • the global response means that the electronic device needs to jump from the current first application to the second application when performing a certain operation, and provide a service response through the interface of the second application.
  • a possible implementation includes: the interface of the first application is a picture preview interface; when the user issues a voice command of "music play", according to the descriptions of steps 103 and 105 above, it is first determined that the control to be added is the "music play" control, the "music play" control icon is then added to the picture preview interface, and the device then jumps to the application interface corresponding to "music play", such as the second application. At this time, the interface of the second application is the music playing interface.
  • in step 106, when the "music play" voice command input by the user is received directly or again, the "music play" control is activated and the operation corresponding to the music play instruction is executed, so as to provide the user with the music playing function.
  • the voice command of "playing music” is a switching command, and the electronic device performs an interface switching operation after receiving the switching command.
  • the interface of the first application or the second application includes: video playback, music playback, picture/photo preview, text browsing, dialing, and messaging interfaces.
  • for the interface response, the voice command issued by the user may be referred to as interface voice; for the global response, the voice command issued by the user may be referred to as global voice.
  • the above-mentioned "music play" control may be displayed on the picture preview application interface in the form of a floating window, and controls such as a music list, song names, and play/pause may be displayed in the floating window.
  • a program list can also be displayed in the floating window, such as a list of the programs currently being broadcast live on all TV channels.
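The distinction between interface responses and global responses amounts to classifying a voice command and, for global voice, naming the application to jump to. A hedged sketch with made-up command sets and application names:

```python
# Illustrative command sets; which commands are interface vs. global
# would in practice depend on the current application.
INTERFACE_COMMANDS = {"play the next episode", "close the bullet screen",
                      "enlarge the picture"}
GLOBAL_COMMANDS = {"music play": "music_app"}  # command -> jump target

def classify_response(command):
    """Return ('interface', None), ('global', target_app), or ('unknown', None)."""
    if command in INTERFACE_COMMANDS:
        return ("interface", None)
    if command in GLOBAL_COMMANDS:
        return ("global", GLOBAL_COMMANDS[command])
    return ("unknown", None)
```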
  • This embodiment also provides another control display method. It differs from Embodiment 1 in that the second control is determined and displayed in the application interface of the electronic device before the user issues the first voice command, so as to provide the user with rich service responses.
  • a first interface of the first application is displayed on the display screen of the electronic device, and the first interface includes a first control, as shown in FIG. 6 , the method includes:
  • 201. The electronic device receives a wake-up word input by the user.
  • 202. In response to the received wake-up word, display a second interface of the first application, where the second interface includes the first control and the second control. Specifically, this includes:
  • the first set of virtual components is associated with a first interface of the first application and includes one or more controls displayed in the first interface when the electronic device is awakened.
  • the controls displayed in the first interface are: exit "×" 71, download 72, comment bar 73, directory 74, eye-protection brightness 75, voice reading 76, and reading settings "Aa" 77; the set composed of these controls is the first virtual component set.
  • the second virtual component set is associated with a first interface type of the first application, the first interface type is text browsing, and the virtual component set corresponding to the text browsing includes at least one common control.
  • the commonly used controls may include all controls in the first virtual component set, and the number of all controls contained in the second virtual component set is greater than or equal to the number of controls in the first virtual component set.
  • the commonly used controls can be created and added by using the SDK.
  • a method for obtaining the second virtual voice component set is as follows: there is a correspondence between the interface type of each application and a virtual component set, as shown in Table 2 below; the electronic device can use the correspondence to determine, according to the first interface type of the first application, the virtual component set corresponding to the interface type of the current application, that is, the second virtual component set.
  • each interface type corresponds to a virtual component set.
  • the text browsing interface corresponds to "virtual component set 4"; when virtual component set 4 is determined to be the second virtual component set, the second virtual component set includes all the controls in virtual component set 4.
  • the above correspondence can also be combined with the SDK table of Embodiment 1 to form a new relationship table, which includes the interface types, the virtual component sets, the control icons contained in each virtual component set, and the prompt information corresponding to each control.
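The Table 2 correspondence can be sketched as a mapping from interface type to virtual component set; the set contents below are illustrative, loosely following the examples in the text:

```python
# Hypothetical rendering of the Table 2 correspondence between
# interface types and virtual component sets.
INTERFACE_TYPE_TO_COMPONENT_SET = {
    "video playback": {"play/pause", "next episode", "open/close bullet screen",
                       "send bullet screen", "double speed", "exit"},
    "text browsing": {"exit", "download", "directory", "voice reading",
                      "previous chapter", "next chapter"},
}

def second_virtual_component_set(interface_type):
    """Look up the virtual component set for the current interface type."""
    return INTERFACE_TYPE_TO_COMPONENT_SET.get(interface_type, set())
```

Comparing this set with the controls already on screen then yields the second controls to add, as in Embodiment 1.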
  • the control icon and prompt information corresponding to each second control may be displayed together in the blank area of the second interface; or displayed in the form of a floating window.
  • the blank area may be understood as an area not covered by controls.
  • existing control icons in the interface can be reduced in size or moved to create a blank area, and the control icons and prompt information can then be displayed in that blank area.
  • the embodiment does not limit the display position and manner of the second control.
  • the e-book APP displays a first interface, and the electronic device acquires the first virtual component set according to the first interface of the first application; the first virtual component set includes the following controls: exit "×" 71, download 72, comment bar 73, directory 74, eye-protection brightness 75, voice reading 76, and reading settings "Aa" 77.
  • the second controls to be added are "previous chapter" 78 and "next chapter" 79; the control icons corresponding to "previous chapter" 78 and "next chapter" 79 are added to the second interface of the first application.
  • the method further includes: starting the second control, executing an operation corresponding to the second control, and outputting a service response.
  • the specific execution process is the same as "step 106" in the first embodiment.
  • the switching operation corresponds to a global response.
  • the switching operation may be manual switching by the user, or may be initiated through a voice command input by the user.
  • the electronic device receives and parses the voice command, and performs the operation of switching the interface.
  • for example, the second application is a video playback application, and the interface of the video playback application includes first controls. As shown in FIG. 7B, the first interface of the second application includes the following first controls: play/pause 31, turn on/off bullet screen 32, send bullet screen 33, double speed 34, and exit "×" 35, wherein the exit control "×" 35 is the same as the control "×" 71 in the first interface of the first application.
  • 205. In response to the received wake-up word, display a second interface of the second application, where the second interface of the second application includes the first control and a third control.
  • the component set corresponding to the first interface type of the second application is a third virtual component set.
  • for example, the electronic device determines that the interface type corresponding to the current video playback application is the "video playback" interface, and finds, according to Table 2 above, that the "video playback" interface corresponds to "virtual component set 1", which includes the following controls: play/pause, next episode, open/close bullet screen, send bullet screen, double speed, and exit.
  • the third control is "next episode"; the control icon 36 of "next episode" is added on the second interface of the second application. The specific adding process is the same as that of Embodiment 1 and is not repeated in this embodiment.
  • the above method further includes: starting a third control, executing an operation corresponding to the third control, and displaying the output service response on the interface of the second application.
  • for example, start the "next episode" control 36 and execute the voice command operation of "play the next episode".
  • alternatively, a fourth control is activated to execute the operation corresponding to the voice command issued by the current user, and the response result is displayed on the current interface of the second application.
  • the fourth control can be any one of: play/pause 31, turn on/off bullet screen 32, send bullet screen 33, double speed 34, and exit "×" 35.
  • electronic devices can also use different colors or logos to distinguish controls that can provide voice services from controls that cannot; for controls that cannot provide voice services locally on the electronic device, the cloud server can be used to realize the control function, so as to provide users with rich voice service functions.
  • in this embodiment, a virtual component set corresponding to each interface type is set, and the virtual component set is compared with the controls contained in the current interface, so as to determine the commonly used controls that are missing from the current interface and add them automatically. For example, when the electronic device receives the wake-up word input by the user, a second control not present in the current interface of the first application is automatically added and displayed, realizing automatic addition and display of the second control associated with the first application and ensuring that the same voice controls are displayed in the same type of application.
  • for example, the method realizes displaying the "previous chapter" and "next chapter" voice controls in the interfaces of different e-book applications, so as to facilitate the user's voice interaction and improve the user experience.
  • in addition, when the user switches applications, a third control is automatically added and displayed on the current interface of the second application, so that the display screen of the electronic device can display all the controls corresponding to the interface type of the current application. For example, when switching from the e-book application to the video playback application, the "next episode" voice control missing from the current interface can be automatically added and displayed on the video playback interface, so that all voice controls associated with different applications are displayed on the display screen of the electronic device, thereby enhancing the voice service function of the electronic device and improving user satisfaction.
  • the prompt information corresponding to the newly added control is also displayed.
  • the prompt information may include the following:
  • Tip 1: display text inside or outside the search box, or floating annotation text, such as "please say the content of the search, such as the 100th element, beautiful pen", etc., and highlight this annotation text.
  • Tip 2: display preset search text inside or outside the search box, or as floating annotation text. The search text can be generalized information, such as "search for pictures, search for information", or hot words, such as "Tucao conference variety show", "Coronavirus", etc.
  • after the user speaks the search content according to the above prompt, the electronic device automatically performs a quick search using the preset text, finds the result in the database, and outputs a service response.
  • it also includes: automatically creating and updating a control set, so as to provide rich voice service functions for different electronic devices.
  • a possible implementation is to use an IDE to develop and create various voice controls.
  • a voice environment includes devices such as mobile phones, TVs, and car devices, and each device contains different voice controls.
  • the functions of the voice controls supported by each are also different.
  • the voice commands that can be provided by the virtual component set of the mobile phone terminal include ⁇ A, B, C, D ⁇ ;
  • the voice commands supported by the virtual component set of the TV include ⁇ A, B, C, D, E, F, G ⁇ ;
  • the voice commands supported by the virtual component set of the vehicle include ⁇ F ⁇ .
  • the system predefines a common virtual component set; for example, the voice controls developed in the IDE environment using the SDK can support the voice commands {A, B, C, D, E, F, G}, covering all voice commands in the distributed interfaces of the multiple devices.
  • at least one target control is added to devices such as mobile phones, TVs, and vehicle devices through the SDK-integrated virtual component set covering all voice commands, so as to ensure that each device has the ability to execute all voice commands and to improve the user experience.
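Given the example command sets above, the target controls each device needs can be derived by subtracting the device's own command set from the SDK's full set. This is an illustrative sketch mirroring the letters in the example, not a real API:

```python
# Full set of voice commands supported by the SDK's virtual component set.
SDK_COMMANDS = {"A", "B", "C", "D", "E", "F", "G"}

# Commands each device's own virtual component set currently supports.
DEVICE_COMMANDS = {
    "phone":   {"A", "B", "C", "D"},
    "tv":      {"A", "B", "C", "D", "E", "F", "G"},
    "vehicle": {"F"},
}

# Target controls to add so that every device can execute all voice commands.
targets = {device: SDK_COMMANDS - cmds for device, cmds in DEVICE_COMMANDS.items()}
```

For instance, the phone is missing {E, F, G}, the TV is missing nothing, and the vehicle device is missing everything except F.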
  • the IDE is used to create and develop a new virtual component set, including voice controls that can execute all voice commands in the distributed interface, and these controls are automatically added to different electronic devices, thereby enhancing the voice service capability of the electronic devices. In addition, each electronic device also supports remote invocation of voice capabilities, such as obtaining the service response of the target control from the cloud server, thereby avoiding secondary development of newly added controls locally on each electronic device and saving software development costs.
  • the virtual component set described in the above embodiments is also called a "component set".
  • the second virtual component set may be called a "first component set",
  • and the third virtual component set may be called a "second component set".
  • the first component set is associated with the first interface type of the first application,
  • and the second component set is associated with the first interface type of the second application.
  • the first interface type includes but is not limited to video playback, music playback, picture/photo preview, text browsing, etc.
  • the first application and the second application may be applications (APPs) such as video playback, audio playback, and picture/photo preview.
  • FIG. 9 is a schematic structural diagram of a control display device according to an embodiment of the present application.
  • the apparatus may be an electronic device, or a component located in the electronic device, such as a chip circuit.
  • the device can implement the control adding method in the foregoing embodiments.
  • the apparatus may include: a receiving module 901 and a processing module 902 .
  • the apparatus may further include other units or modules such as a communication module, a storage unit, etc., which are not shown in FIG. 9 .
  • the apparatus further includes a display screen for displaying at least one control.
  • the receiving module 901 is configured to receive the wake-up word input by the user; the processing module 902 is configured to, in response to the received wake-up word, instruct the display screen to display the second interface of the first application, where the second interface includes the first control and the second control.
  • the processing module 902 is further configured to receive a user's switching operation and instruct the display screen to display the first interface of the second application, where the first interface of the second application includes the first control;
  • the receiving module 901 is further configured to receive the wake-up word input again by the user; the processing module 902 is further configured to, in response to the received wake-up word, instruct the display screen to display the second interface of the second application, where the second interface of the second application includes the first control and the third control.
  • the processing module 902 is further configured to acquire the first component set according to the first interface type of the first application before the second interface of the first application is displayed, where the first component set includes the second control.
  • the processing module 902 is further configured to acquire the second component set according to the first interface type of the second application before the second interface of the second application is displayed, where the second component set includes the third control.
  • the processing module 902 may acquire the first set of components and the second set of components from a storage unit.
  • the second interface of the first application further includes: prompt information corresponding to the second control.
  • the processing module 902 is further configured to, in response to the first voice command, instruct the display screen to display a third interface of the first application, where the third interface includes the service response output after the operation corresponding to the first voice command is executed.
  • the processing module 902 is further configured to start the second control, execute the operation corresponding to the first voice command, and instruct the display of the service response on the third interface of the first application; or, to receive, through the communication module, the service response sent by the server, and instruct the display of the service response on the third interface of the first application.
  • the communication module has a data sending and receiving function.
  • the processing module 902 is further configured to instruct the display of the control icon of the second control on the second interface of the first application; or, to instruct the display of the control icon and the prompt information of the second control on the second interface of the first application.
  • the second interface of the first application further includes a control icon of a fourth control, where the fourth control is used to execute the operation corresponding to the second voice command; the control icon of the second control is a first color, the control icon of the fourth control is a second color, and the first color is different from the second color.
  • the processing module 902 is further configured to, in response to the first voice command, start the second control and execute the operation corresponding to the first voice command; and, in response to the second voice command, send an instruction signal to the server through the communication module , the instruction signal is used to instruct the server to perform the operation corresponding to the second voice instruction.
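The two response paths described above can be sketched as a small dispatcher: locally supported controls run on the device, while other commands forward an indication signal to the server; the icon color reflects which path a control uses. The class, method names, and the `FakeServer` stand-in are assumptions for illustration:

```python
class VoiceDispatcher:
    """Route a recognized voice command to a local control or to the cloud server."""

    def __init__(self, local_controls, server):
        self.local_controls = local_controls  # e.g. {"next_episode": handler_fn}
        self.server = server

    def icon_color(self, command):
        # First color marks locally executable controls; second color marks
        # controls whose service response comes from the server.
        return "first_color" if command in self.local_controls else "second_color"

    def handle(self, command):
        if command in self.local_controls:
            return self.local_controls[command]()  # start the control locally
        return self.server.execute(command)        # indication signal to the server


class FakeServer:
    """Stand-in for the cloud server that executes unsupported commands."""
    def execute(self, command):
        return f"server response for {command}"


dispatcher = VoiceDispatcher({"next_episode": lambda: "played next episode"},
                             FakeServer())
```

Here `dispatcher.handle("next_episode")` runs locally, while any other command is fulfilled remotely, matching the local/remote split the patent describes.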
  • the processing module 902 is further configured to instruct the display screen to display a first service response or a second service response, where the first service response is the service response output by the processing module 902 after executing the operation corresponding to the first voice command, and the second service response is the service response received from the server, output by the server after executing the second voice command.
  • the receiving module 901 is configured to receive a wake-up word input by the user; the processing module 902 is configured to, in response to the received wake-up word, instruct the display screen to display the first interface of the first application, where the first interface includes a first control.
  • the receiving module 901 is further configured to receive the first voice command input by the user; the processing module 902 is further configured to, in response to the received first voice command, instruct the display screen to display the second interface of the first application, where the second interface includes the first control and a second control, and the second control is used to execute an operation corresponding to the first voice command.
  • the processing module 902 is further configured to, before instructing the display screen to display the second interface of the first application, obtain the text content corresponding to the first voice command, where the text content corresponds to the second control; and to acquire the second control when the first interface of the first application does not include the second control.
  • the processing module 902 is further configured to acquire the second control through an SDK table, where the SDK table includes the text content and the second control.
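The SDK-table lookup can be sketched as a mapping from recognized text content to a control, consulted only when the current interface lacks that control. The table entries and the `acquire_control` helper are hypothetical, for illustration only:

```python
# Hypothetical SDK table: recognized text content -> control identifier.
SDK_TABLE = {
    "next episode": "next_episode_control",
    "previous chapter": "previous_chapter_control",
}

def acquire_control(text_content, interface_controls):
    """Return the control to add, or None if unknown or already on the interface."""
    control = SDK_TABLE.get(text_content)
    if control is None or control in interface_controls:
        return None
    return control

# The video interface only has play/pause, so "next episode" yields a new control.
new_control = acquire_control("next episode", ["play_pause_control"])
```

Under these assumptions, the returned control would then be added to and displayed on the second interface.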
  • the receiving module 901 is further configured to receive the first voice command input again by the user; the processing module 902 is further configured to, in response to the first voice command, instruct the display screen to display a third interface of the first application, where the third interface includes a service response output after the operation corresponding to the first voice command is executed.
  • FIG. 10 shows a schematic structural diagram of an electronic device.
  • the device includes a processor 110 and a memory 120, and further includes: a USB interface 130, a power management module 140, a battery 141, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, buttons 191, a camera 192, a display screen 193, etc.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device.
  • the electronic device may include more or fewer components than shown, or combine some components, or split some components, or have a different arrangement of components.
  • the illustrated components may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may be composed of an integrated circuit (IC), for example a single packaged IC, or a plurality of connected packaged ICs with the same or different functions.
  • the processor 110 may include a central processing unit (central processing unit, CPU) or a digital signal processor (Digital Signal Processor, DSP) or the like.
  • the processor 110 may also include a hardware chip.
  • the hardware chip may be an application specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a SIM interface, and/or a universal serial bus (USB) interface, etc.
  • the memory 120 is used to store and exchange various types of data or software, including the SDK table, the first voice command, the second voice command, the text content corresponding to the first voice command and the second voice command, the first virtual component set, the second virtual component set, control icons, etc., and is also used to store files such as audio, video, and pictures/photos. In addition, computer program instructions or code may be stored in the memory 120.
  • the memory 120 may include volatile memory, such as random access memory (RAM); it may also include non-volatile memory, such as read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); the memory 120 may also include a combination of the above types of memory.
  • the display screen 193 can be used to display the control icons and prompt information corresponding to the first control, the second control, and the third control, and to display different application interfaces, such as the first interface and the second interface of the first application and the first interface of the second application.
  • the display screen 193 can also display pictures, photos, text information, play media streams such as video/audio, and the like.
  • the display screen 193 may include a display panel and a touch panel.
  • the display panel may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), and the like.
  • the touch panel is also referred to as a touch screen, a touch-sensitive screen, or the like.
  • the electronic device 100 may include one or N display screens 193 , where N is a positive integer greater than one.
  • the audio module 170, the speaker 170A, the receiver 170B, and the microphone 170C can realize the voice interaction between the user and the electronic device.
  • the audio module 170 includes an audio circuit, which can convert received audio data into a signal and transmit it to the speaker 170A, and the speaker 170A converts it into a sound signal for output.
  • the microphone 170C is used to receive a sound signal input by the user, such as a wake-up word, a first voice command, or a second voice command, convert the received sound signal into an electrical signal, and transmit it to the audio module 170. The audio module 170 converts the electrical signal into audio data and outputs the audio data to the processor 110 for further processing to obtain the text content corresponding to the voice command.
  • the sensor module 180 may include at least one sensor, such as a pressure sensor, a gyro sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a touch sensor, a fingerprint sensor, and the like.
  • the keys 191 include a power-on key, a volume key, and the like.
  • the USB interface 130 is an interface that conforms to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device, and can also be used to transmit data between the electronic device and peripheral devices. It can also be used to connect headphones to play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as virtual reality devices.
  • the power management module 140 is used for connecting the battery 141 and the processor 110 .
  • the power management module 140 supplies power to the processor 110, the memory 120, the display screen 193, the camera 192, the mobile communication module 150, the wireless communication module 160, and the like.
  • the power management module 140 may be provided in the processor 110 .
  • the wireless communication function of the electronic device can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor (or baseband chip).
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • each antenna in the electronic device can be used to cover a single communication frequency band or multiple communication frequency bands, and different antennas can also be multiplexed to improve antenna utilization.
  • the mobile communication module 150 can provide a wireless communication solution including 2G/3G/4G/5G etc. applied on the electronic device.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA) and the like. In some embodiments, at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110 .
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device, including wireless local area network (WLAN) (such as wireless fidelity (WiFi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110 , perform frequency modulation on it, amplify it, and convert it into electromagnetic waves for radiation through the antenna 2 .
  • the antenna 1 of the electronic device is coupled with the mobile communication module 150, and the antenna 2 is coupled with the wireless communication module 160, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (GLONASS), and a Beidou navigation satellite system (BDS).
  • the method shown in FIG. 2 or FIG. 6 can be implemented; in the apparatus shown in FIG. 9, the function of the receiving module 901 can be implemented by the audio module 170 or the microphone 170C in the audio module 170, the function of the processing module 902 may be implemented by components such as the processor 110 and the display screen 193, and the function of the storage unit may be implemented by the memory 120.
  • an embodiment of the present application also provides a system, which includes at least one of the above electronic devices, and may also include a server, such as a cloud server, for implementing the control display method in the foregoing embodiments.
  • the structure of the server may be the same as or different from the structure of the electronic device shown in FIG. 10 , which is not limited in this embodiment.
  • an embodiment of the present application further provides a computer storage medium, wherein the computer storage medium may store a program, and when the program is executed, the program may include some or all of the steps of the control adding method provided by the present application.
  • the storage medium includes, but is not limited to, a magnetic disk, an optical disk, a ROM, or a RAM.
  • all or part of the implementation may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented in software, it can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer program instructions, and when the computer loads and executes the computer program instructions, all or part of the method processes or functions described in the above-mentioned embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium.


Abstract

The present application discloses a control display method and device. The method is applied to an electronic device that includes a display screen, where a first interface on the display screen includes a first control. The method includes: receiving a wake-up word input by a user, and in response to the received wake-up word, displaying a second interface of a first application, where the second interface includes the first control and a second control; receiving a switching operation from the user, and displaying a first interface of a second application, where the first interface of the second application includes the first control; and receiving the wake-up word input again by the user, and in response to the wake-up word, displaying a second interface of the second application, where the second interface of the second application includes the first control and a third control. The method realizes the automatic addition and display of the voice controls associated with different applications, ensuring that the same number and kinds of voice controls are displayed on the interfaces of different applications, thereby enhancing the voice service function of the electronic device and improving user satisfaction.

Description

Control display method and device
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on July 28, 2020, with application number 202010736457.4 and invention title "Control display method and device", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of voice control, and in particular to a control display method and device.
Background
With the rapid development of user device forms and functions, interaction between users and various devices has become frequent. Breaking the boundaries between devices so that users can interact seamlessly across multiple devices has become a major trend in the evolution toward intelligent interconnection.
For example, when a user wakes up a television (TV) by voice and issues a voice command to play a video, differences in the design of different application interfaces cause the same voice command to be answered inconsistently across the interfaces of different video playback applications. For example, "Tencent Video" supports the voice service response "next episode": when the TV is woken up and receives the user's voice command to play the "next episode", it can recognize and automatically execute the system event of playing the "next episode" and feed the response back to the user. However, the same "next episode" voice command may not be executable in other application interfaces; for example, if another application interface has no control for playing the "next episode", the user gets no feedback response, and user satisfaction decreases.
Summary
The present application provides a control display method and device, so that the same controls can be displayed in different application interfaces, thereby improving user satisfaction. Specifically, the following technical solutions are disclosed:
In a first aspect, the present application provides a control display method applicable to an electronic device that includes a display screen. A first interface of a first application is displayed on the display screen, where the first interface includes a first control. The method includes: receiving a wake-up word input by a user; in response to the received wake-up word, displaying a second interface of the first application, where the second interface includes the first control and a second control; receiving a switching operation from the user, and displaying a first interface of a second application on the display screen, where the first interface of the second application includes the first control; receiving the wake-up word input again by the user; and in response to the received wake-up word, displaying a second interface of the second application, where the second interface of the second application includes the first control and a third control.
In the method provided by this aspect, when the electronic device receives the wake-up word input by the user, a second control absent from the current interface is automatically added to and displayed on the current interface of the first application, realizing the automatic addition and display of the second control associated with the first application. When the first application is switched to the second application, a third control is automatically added to and displayed on the current interface of the second application, so that when the user switches applications, the display screen of the electronic device can display the first control from the original first application and the third control of the second application, ensuring that all controls associated with different applications are displayed on the display screen of the electronic device, thereby enhancing the voice service function of the electronic device and improving user satisfaction.
With reference to the first aspect, in a possible implementation of the first aspect, before displaying the second interface of the first application in response to the received wake-up word, the method further includes: acquiring a first component set according to a first interface type of the first application, where the first component set includes the second control.
Optionally, the first component set further includes a third control, a fourth control, and the like.
In this implementation, a correspondence is established between the interface type of each application and a component set, where the component set includes at least one voice control, which facilitates automatically adding voice controls according to the interface type of the current interface.
In addition, optionally, the component set is also called a "virtual component set". For example, the component set displayed in the first interface of the first application is called a first virtual component set, and the component set contained in the first interface type of the first application is called a second virtual component set.
With reference to the first aspect, in another possible implementation of the first aspect, before displaying the second interface of the second application in response to the received wake-up word, the method further includes: acquiring a second component set according to the first interface type of the second application, where the second component set includes the third control. In this implementation, a correspondence is established between the first interface type of the second application and the second component set, where the second component set includes the third control, which facilitates automatically adding the third control according to the current interface type.
Optionally, the second interface of the first application further includes: prompt information corresponding to the second control. For example, the prompt information may be: next episode, previous episode, play/pause, episode selection, etc.
With reference to the first aspect, in yet another possible implementation of the first aspect, the method further includes: in response to a first voice command, displaying a third interface of the first application, where the third interface includes a service response output after the operation corresponding to the first voice command is executed. In this implementation, when the user issues the first voice command, the control in the first application is started, executes the first voice command, outputs a service response, and displays it on the third interface, thereby providing the user with the corresponding voice service.
With reference to the first aspect, in yet another possible implementation of the first aspect, displaying the third interface of the first application in response to the first voice command includes: starting the second control, executing the operation corresponding to the first voice command, and displaying the service response on the third interface of the first application; or, the electronic device receiving the service response sent by a server and displaying the service response on the third interface of the first application. In this implementation, after the electronic device has added the second control, the function of the second control can be realized by invoking the server, which enhances the service capability of the electronic device, thereby providing the user with all voice control functions presented on the current interface and improving user satisfaction.
With reference to the first aspect, in yet another possible implementation of the first aspect, displaying the second interface of the first application includes: displaying a control icon of the second control on the second interface of the first application; or, displaying the control icon and the prompt information of the second control on the second interface of the first application. In this implementation, the control icon and the prompt information of the second control are added and displayed together on the current interface, making it convenient for the user to issue voice commands according to the prompt information and improving voice interaction efficiency.
With reference to the first aspect, in yet another possible implementation of the first aspect, the second interface of the first application further includes a control icon of a fourth control, where the fourth control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the fourth control is a second color, and the first color is different from the second color. In response to the first voice command, the electronic device starts the second control and executes the operation corresponding to the first voice command; in response to the second voice command, the electronic device sends an indication signal to the server, where the indication signal is used to instruct the server to execute the operation corresponding to the second voice command.
In this implementation, the electronic device uses different colors to distinguish controls that can provide voice services locally from controls that cannot, for example displaying the icons of locally supported controls in the first color and the icons of locally unsupported controls in the second color, making them easy for the user to identify and distinguish. In addition, for a second voice command whose service response is not supported locally, the electronic device can have the server or another device fulfill it and transmit the result to the electronic device, thereby improving the service capability of the electronic device and meeting the user's needs.
It should be understood that other ways of distinguishing, such as logos or patterns, may also be used, which is not limited in this application.
In a second aspect, the present application further provides a control display method applied to an electronic device that includes a display screen. The method includes: receiving a wake-up word input by a user; in response to the received wake-up word, displaying a first interface of a first application on the display screen, where the first interface includes a first control; receiving a first voice command input by the user; and in response to the received first voice command, displaying a second interface of the first application, where the second interface includes the first control and a second control, and the second control is used to execute an operation corresponding to the first voice command.
In the method provided by this aspect, after the electronic device is woken up by the user, the electronic device can display, on its current interface, the control corresponding to any voice command issued by the user, thereby providing the corresponding service when the user issues the voice command again, and avoiding the situation where, because controls differ between application interfaces, a voice command issued by the user cannot be executed in the current interface of the electronic device. The method realizes the automatic addition and display of the second control, adds voice service functions, and improves user satisfaction.
With reference to the second aspect, in a possible implementation of the second aspect, before displaying the second interface of the first application in response to the received first voice command, the method further includes: obtaining text content corresponding to the first voice command, where the text content corresponds to the second control; and acquiring the second control when the first interface of the first application does not include the second control.
With reference to the second aspect, in another possible implementation of the second aspect, acquiring the second control includes: acquiring the second control through an SDK table, where the SDK table includes the text content and the second control. This implementation uses the SDK table to extend the voice control functions of the electronic device, realizing the automatic addition and display of the second control.
It should be understood that the SDK table further includes: the first control and the text content corresponding to the first control, the third control and the text content corresponding to the third control, and so on.
With reference to the second aspect, in yet another possible implementation of the second aspect, the method further includes: receiving the first voice command input again by the user; and in response to the first voice command, displaying a third interface of the first application, where the third interface includes a service response output after the operation corresponding to the first voice command is executed.
With reference to the second aspect, in yet another possible implementation of the second aspect, displaying the third interface of the first application in response to the first voice command includes: starting the second control, executing the operation corresponding to the first voice command, and displaying the service response on the third interface of the first application; or, the electronic device receiving the service response sent by a server and displaying the service response on the third interface of the first application. In this implementation, the function of the second control can be realized by invoking the server, which enhances the voice service capability of the electronic device and improves user satisfaction.
In addition, using a cloud server to provide the service response for the electronic device also avoids local software development of the second control on the electronic device, saving software development costs.
With reference to the second aspect, in yet another possible implementation of the second aspect, displaying the second interface of the first application includes: displaying a control icon of the second control on the second interface of the first application; or, displaying the control icon and the prompt information of the second control on the second interface of the first application.
With reference to the second aspect, in yet another possible implementation of the second aspect, the second interface of the first application further includes a control icon of a third control, where the third control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the third control is a second color, and the first color is different from the second color. In response to the first voice command, the electronic device starts the second control and executes the operation corresponding to the first voice command; in response to the second voice command, the electronic device sends an indication signal to the server, where the indication signal is used to instruct the server to execute the operation corresponding to the second voice command.
In a third aspect, the present application provides a control display apparatus that includes a display screen on which a first interface of a first application is displayed, where the first interface includes a first control. The apparatus further includes:
a receiving module, configured to receive a wake-up word input by a user; and a processing module, configured to: in response to the received wake-up word, instruct the display screen to display a second interface of the first application, where the second interface includes the first control and a second control; and receive a switching operation from the user and instruct the display screen to display a first interface of a second application, where the first interface of the second application includes the first control. The receiving module is further configured to receive the wake-up word input again by the user; the processing module is further configured to, in response to the received wake-up word, instruct the display screen to display a second interface of the second application, where the second interface of the second application includes the first control and a third control.
With reference to the third aspect, in a possible implementation of the third aspect, the processing module is further configured to, before the second interface of the first application is displayed, acquire a first component set according to a first interface type of the first application, where the first component set includes the second control.
With reference to the third aspect, in another possible implementation of the third aspect, the processing module is further configured to, before the second interface of the second application is displayed, acquire a second component set according to the first interface type of the second application, where the second component set includes the third control.
Optionally, the second interface of the first application further includes: prompt information corresponding to the second control.
With reference to the third aspect, in yet another possible implementation of the third aspect, the processing module is further configured to, in response to a first voice command, display a third interface of the first application, where the third interface includes a service response output after the operation corresponding to the first voice command is executed.
With reference to the third aspect, in yet another possible implementation of the third aspect, the processing module is further configured to start the second control, execute the operation corresponding to the first voice command, and instruct the display of the service response on the third interface of the first application; or, to receive, through a communication module, the service response sent by the server, and instruct the display of the service response on the third interface of the first application.
With reference to the third aspect, in yet another possible implementation of the third aspect, the processing module is further configured to instruct the display of the control icon of the second control on the second interface of the first application; or, to instruct the display of the control icon and the prompt information of the second control on the second interface of the first application.
With reference to the third aspect, in yet another possible implementation of the third aspect, the second interface of the first application further includes a control icon of a fourth control, where the fourth control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the fourth control is a second color, and the first color is different from the second color.
The processing module is further configured to, in response to the first voice command, start the second control and execute the operation corresponding to the first voice command; and, in response to the second voice command, send an indication signal to the server through the communication module, where the indication signal is used to instruct the server to execute the operation corresponding to the second voice command.
In a fourth aspect, the present application further provides a control display apparatus. The apparatus includes a display screen and further includes:
a receiving module, configured to receive a wake-up word input by a user; and a processing module, configured to, in response to the received wake-up word, instruct the display screen to display a first interface of a first application, where the first interface includes a first control. The receiving module is further configured to receive a first voice command input by the user; the processing module is further configured to, in response to the received first voice command, instruct the display screen to display a second interface of the first application, where the second interface includes the first control and a second control, and the second control is used to execute an operation corresponding to the first voice command.
With reference to the fourth aspect, in a possible implementation of the fourth aspect, the processing module is further configured to, before the second interface of the first application is displayed, obtain text content corresponding to the first voice command, where the text content corresponds to the second control; and to acquire the second control when the first interface of the first application does not include the second control.
With reference to the fourth aspect, in another possible implementation of the fourth aspect, the processing module is further configured to acquire the second control through an SDK table, where the SDK table includes the text content and the second control.
With reference to the fourth aspect, in yet another possible implementation of the fourth aspect, the receiving module is further configured to receive the first voice command input again by the user; the processing module is further configured to, in response to the first voice command, instruct the display screen to display a third interface of the first application, where the third interface includes a service response output after the operation corresponding to the first voice command is executed.
With reference to the fourth aspect, in yet another possible implementation of the fourth aspect, the processing module is further configured to start the second control, execute the operation corresponding to the first voice command, and instruct the display screen to display the service response on the third interface of the first application; or, to receive, through a communication module, the service response sent by the server, and instruct the display screen to display the service response on the third interface of the first application.
With reference to the fourth aspect, in yet another possible implementation of the fourth aspect, the processing module is further configured to instruct the display screen to display the control icon of the second control on the second interface of the first application; or, to instruct the display screen to display the control icon and the prompt information of the second control on the second interface of the first application.
With reference to the fourth aspect, in yet another possible implementation of the fourth aspect, the second interface of the first application further includes a control icon of a third control, where the third control is used to execute an operation corresponding to a second voice command; the control icon of the second control is a first color, the control icon of the third control is a second color, and the first color is different from the second color.
The processing module is further configured to, in response to the first voice command, start the second control and execute the operation corresponding to the first voice command; and, in response to the second voice command, send an indication signal to the server, where the indication signal is used to instruct the server to execute the operation corresponding to the second voice command.
In a fifth aspect, the present application further provides an electronic device. The electronic device includes a processor and a memory coupled to the processor, and may further include a transceiver and the like. The memory is used to store computer program instructions; the processor is used to execute the program instructions stored in the memory, so that the electronic device performs the method in the various implementations of the foregoing first aspect or second aspect. The transceiver is used to implement data transmission functions.
In addition, the electronic device further includes an audio module, a speaker, a receiver, a microphone, and the like. Specifically, after the microphone of the electronic device receives the wake-up word input by the user, it transmits the wake-up word to the audio module; the processor processes the wake-up word parsed by the audio module and, in response to the received wake-up word, instructs the display screen to display the second interface of the first application, where the second interface includes the first control and the second control. The processor is further configured to receive a switching operation from the user and instruct the display screen to display the first interface of the second application, where the first interface of the second application includes the first control. When the microphone receives the wake-up word input again by the user, the processor, in response to the received wake-up word, instructs the display screen to display the second interface of the second application, where the second interface of the second application includes the first control and the third control.
Optionally, the microphone of the electronic device receives the wake-up word input by the user, and the processor, in response to the received wake-up word, instructs the display screen to display the first interface of the first application, where the first interface includes the first control. The microphone also receives the first voice command input by the user, and the processor, in response to the received first voice command, instructs the display screen to display the second interface of the first application, where the second interface includes the first control and the second control, and the second control is used to execute the operation corresponding to the first voice command.
In a sixth aspect, the present application further provides a computer-readable storage medium storing instructions that, when run on a computer or processor, can be used to perform the method in the foregoing first aspect and its various implementations, or the method in the foregoing second aspect and its various implementations.
In addition, the present application further provides a computer program product including computer instructions that, when executed by a computer or processor, can implement the method in the various implementations of the foregoing first to second aspects.
It should be noted that the beneficial effects corresponding to the technical solutions of the various implementations of the third to sixth aspects are the same as those of the various implementations of the foregoing first and second aspects; for details, refer to the descriptions of beneficial effects in the various implementations of the first and second aspects, which are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic architectural diagram of a smart device system to which an embodiment of the present application is applied;
FIG. 2 is a flowchart of a control display method provided by an embodiment of the present application;
FIG. 3 is a schematic diagram of controls displayed on a first interface of a first application provided by an embodiment of the present application;
FIG. 4A is a schematic diagram of displaying a second control on a second interface of a first application provided by an embodiment of the present application;
FIG. 4B is a schematic diagram of displaying prompt information on the second interface of the first application provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of jumping to a global response according to a voice command provided by an embodiment of the present application;
FIG. 6 is a flowchart of another control display method provided by an embodiment of the present application;
FIG. 7A is a schematic diagram of displaying a second control in a second interface of a first application provided by an embodiment of the present application;
FIG. 7B is a schematic diagram of displaying a third control in a second interface of a second application provided by an embodiment of the present application;
FIG. 8 is a schematic diagram of a distributed interface supporting all voice commands provided by an embodiment of the present application;
FIG. 9 is a schematic structural diagram of a control display apparatus provided by an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description of Embodiments
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present application, the technical solutions in the embodiments are further described in detail below with reference to the accompanying drawings. Before the technical solutions of the embodiments are described, the application scenarios of the embodiments are first described with reference to the drawings.
Referring to FIG. 1, it is a schematic architectural diagram of a smart device system to which an embodiment of the present application is applied. The system may include at least one electronic device. The electronic devices include but are not limited to: mobile phones, tablet computers (Pads), personal computers, virtual reality (VR) terminal devices, augmented reality (AR) terminal devices, wearable devices, televisions (TVs), vehicle-mounted terminal devices, etc. For example, the system shown in FIG. 1 includes a device 101, a device 102, and a device 103, where the device 101 is a mobile phone, the device 102 is a tablet computer, and the device 103 is a TV.
In addition, the system may include more or fewer devices, for example a cloud server 104. As shown in FIG. 1, the cloud server 104 is wirelessly connected to the device 101, the device 102, and the device 103 respectively, thereby realizing interconnection among the device 101, the device 102, and the device 103.
其中,上述每个电子设备包括有输入输出装置,可用于接收用户通过操作而输入的操作指令,以及向用户展示信息。其中,所述输入输出装置可以是独立的多种装置,例如输入装置可以是键盘、鼠标、麦克风等;输出装置可以是显示屏等。并且所述输入输出装置可以集成在一种设备上,例如触摸显示屏等。
进一步地,所述输入输出装置可以显示用户界面(user Interface,UI),以便与用户进行交互。所述UI是应用程序或操作系统与用户之间进行交互以及信息交换的介质接口,它用于实现信息的内部形式与用户可接受的形式之间的转换。一般地,应用程序的用户界面是通过java、可扩展标记语言(extensible markup language,XML)等特定计算机语言编写的源代码,界面源代码在电子设备上经过解析、渲染,最终为用户呈现可识别的内容,比如图片、文字、按钮等控件。
控件(control)也称为部件(widget),是用户界面的基本元素,典型的控件有工具栏(toolbar)、菜单栏(menu bar)、文本框(text box)、按钮(button)、滚动条(scrollbar)、图片和文本。所述控件可以有自己的属性和内容,用户界面中的控件的属性和内容可通过标签或者节点来定义,比如XML通过<Textview>、<ImgView>、<VideoView>等节点来规定界面所包含的控件。一个节点对应用户界面中一个控件或属性,节点经过解析和渲染之后呈现为用户可视的内容。
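作为示意,下面给出一个最小的Python示例(使用标准库xml.etree.ElementTree),演示"一个节点对应用户界面中一个控件"的解析过程;其中的界面源代码、节点名与属性均为本说明举例,并非任何真实框架的规定格式:

```python
import xml.etree.ElementTree as ET

# 一段假设的界面源代码:根节点下的每个子节点对应界面中的一个控件
layout_xml = """
<Layout>
    <Textview text="标题"/>
    <ImgView src="cover.png"/>
    <VideoView src="movie.mp4"/>
</Layout>
"""

root = ET.fromstring(layout_xml)
# 解析出界面所包含的控件类型列表,渲染时可据此逐个创建并呈现控件
controls = [child.tag for child in root]
```

解析后controls为["Textview", "ImgView", "VideoView"],即该界面包含的三类控件节点。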
此外,对于不同的应用程序,比如混合应用(hybrid application)的用户界面中通常还包含有网页。网页,也称为页面,可以理解为内嵌在应用程序的用户界面中的一个特殊的控件,网页是通过特定计算机语言编写的源代码,例如超文本标记语言(hyper text markup language,HTML),层叠样式表(cascading style sheets,CSS),java脚本(JavaScript,JS)等,网页源代码可以由浏览器或与浏览器功能类似的网页显示组件加载和显示为用户可识别的内容。网页所包含的具体内容也是通过网页源代码中的标签或者节点来定义的,比如HTML通过<p>、<img>、<video>、<canvas>来定义网页的元素和属性。
用户界面常用的表现形式是图形用户界面(graphic user interface,GUI),所述GUI是指采用图形方式显示的与电子设备操作相关的用户界面。它可以是在电子设备的显示屏中显示的一个窗口、控件等界面元素。本实施例中,所述控件的展示形式包括图标、按钮、菜单、选项卡、文本框、对话框、状态栏和导航栏等各种可视的界面元素。
本实施例中,利用集成开发环境(Integrated Development Environment,IDE)开发并生成控件,其中,IDE是在一个公共环境中集成了编辑、设计和调试等多种功能,从而为开发人员快速、方便地开发应用程序提供了强有力的支持。IDE中主要包括菜单、工具栏和一些窗口。其中,所述工具栏可用于向窗体添加控件。所述窗体是一小块屏幕区域,通常为矩形,可用来向用户显示信息并接受用户的输入信息。
实施例一
本实施例提供一种控件显示方法,通过在显示屏中添加虚拟语音控件的方式向用户提供公共的语音交互能力,从而提高用户满意度。其中,本方法可应用于前述任意一种电子设备中,具体地,如图2所示,所述方法包括:
101:当电子设备接收到用户输入的唤醒词时,在所述电子设备显示屏的第一应用的第一界面中显示至少一个控件。其中所述第一界面可以是当前界面。
具体地,当电子设备获取用户输入的唤醒词时,会自动进入指令输入状态,等待用户下达语音指令。其中所述唤醒词可以是预定义的唤醒词,比如小艺小艺,小爱同学等等,或者还包括泛化唤醒词,比如在电子设备的摄像头中采集到用户的注意力集中在当前显示屏时,或者,在用户与电子设备进行语音交互情况下,检测到符合预设语音指令集的语音指令时,都可以唤醒该电子设备,并进入语音指令输入状态。
当电子设备被唤醒时,点亮第一应用的第一界面,并在第一界面中显示其所支持的至少一个控件,所述至少一个控件中包括第一控件。可选的,一种可能的实现方式是,所述第一控件为电子设备被唤醒时所述当前界面中显示的控件,如图3所示,当前界面中显示的第一控件包括以下任意一种:播放/暂停31、开启/关闭弹幕32、发弹幕33、倍速"倍速"34、退出"←"35等。其中,第一控件的展示形式可以是图标、按钮、菜单、选项卡、文本框等,本实施例中以图标形式展示控件为例进行说明。当所述第一应用为一种视频播放应用时,比如华为视频、腾讯视频,所述第一界面为视频播放界面。当所述第一应用为一种电子书应用时,所述第一界面为文本浏览界面。
可选的,另一种可能的实现方式是,所述第一控件为常用控件或各种应用中通用的控件,例如,所述常用控件可以是“播放/暂停”控件,或者所述常用控件为一个虚拟组件集中的任一个控件。
102:接收用户下达的第一语音指令。具体地,电子设备通过麦克风接收所述第一语音指令。
可选的,第一语音指令指示用户期望当前界面响应的服务,例如,当前界面为视频播放界面时,第一语音指令为“2倍速播放”,或者,当前界面为文本浏览界面时,第一语音指令为“放大”。
可选的,当用户不清楚当前界面可以响应哪些语音指令时,第一语音指令也可以是“呈现控件图标”等。
103:获得所述第一语音指令对应的文本内容,判断所述第一应用的第一界面是否包括第二控件。
其中,所述文本内容对应于所述第二控件,且所述第二控件用于执行所述第一语音指令对应的操作。具体地,遍历所述第一应用的第一界面中的所有控件,判断第一界面中是否有执行所述第一语音指令的第二控件,例如判断第一界面中的第一控件能否执行"2倍速播放"的操作。例如图3所示,第一界面所包含的所有控件的控件图标为:播放/暂停31、开启/关闭弹幕32、发弹幕33、倍速"倍速"34、退出"←"35,查找是否有能执行"2倍速播放"操作的控件。
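步骤103中"遍历第一界面中的所有控件并判断能否执行语音指令"的查找逻辑,可以用如下Python草图示意(控件名与其支持的指令集均为假设的示例数据,并非专利限定的实现):

```python
def find_control(controls, command_text):
    """遍历当前界面的控件列表,返回能执行 command_text 的控件;找不到返回 None。"""
    for control in controls:
        if command_text in control["supported_commands"]:
            return control
    return None

# 图3示例界面中的部分控件及其支持的语音指令(示意)
first_interface = [
    {"name": "播放/暂停", "supported_commands": {"播放", "暂停"}},
    {"name": "倍速", "supported_commands": {"2倍速播放", "1.5倍速播放"}},
]
```

例如find_control(first_interface, "2倍速播放")命中"倍速"控件,对应步骤104;而"播放下一集"查不到控件,进入步骤105。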
104:如果是,则启动所述第二控件并执行所述第一语音指令对应的操作,为用户提供服务。
可选的,还包括:将对应的服务响应反馈给用户。例如,当用户下达的第一语音指令为“2倍速播放”时,电子设备查询到第一界面中的控件能够提供“2倍速播放”的功能,则对应地启动该控件并执行“2倍速播放”操作,并将服务响应显示在当前界面中。
105:如果否,则获取第二控件,并将所述第二控件显示在所述第一应用的第二界面中。
具体地,一种确定第二控件的实现方式是,通过软件开发工具包(software development kit,SDK)表查找第一语音指令对应的第二控件。所述SDK是一些用于为特定的软件包、软件框架、硬件平台、操作系统等创建应用软件的开发工具的集合,一般地,所述SDK为开发Windows平台下的应用程序所使用的SDK。它不仅能为程序设计语言提供应用程序接口(Application Program Interface,API)的必要文件,还能够与某种嵌入式系统通讯。本实施例中,所述SDK表中包括至少一个语音指令的文本内容与至少一个控件之间的对应关系,所述控件可通过控件图标表示。如表1所示,一个SDK表中可以包括但不限于以下对应关系:播放/暂停、下一集、开启/关闭弹幕、发弹幕、倍速、退出。
其中,所述SDK表可以预先存储在电子设备中,或者电子设备从云端服务器获得。可选的,所述SDK表可以实时地更新,并周期性地被电子设备获取,从而为用户提供丰富的语音服务功能。
表1、SDK表
(表1在原文中以图像形式给出,其内容为语音指令文本与对应控件图标的对应关系,涉及的语音指令包括:播放/暂停、下一集、开启/关闭弹幕、发弹幕、倍速、退出。)
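SDK表本质上是"语音指令文本→控件"的映射,其查找过程可用如下Python草图示意(控件以字符串标识代替控件图标,名称均为示例):

```python
# SDK表:语音指令文本 -> 控件标识(示意,对应表1的对应关系)
SDK_TABLE = {
    "播放/暂停": "control_play_pause",
    "下一集": "control_next_episode",
    "开启/关闭弹幕": "control_danmaku_toggle",
    "发弹幕": "control_danmaku_send",
    "倍速": "control_speed",
    "退出": "control_exit",
}

def lookup_second_control(command_text):
    """按第一语音指令的文本内容查SDK表,返回对应的第二控件;查不到返回 None。"""
    return SDK_TABLE.get(command_text)
```

SDK表可预存在本地或从云端服务器周期性更新,对应地只需替换该映射的内容。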
另一种确定所述第二控件的实现方式,包括:
105-1:获取第一虚拟组件集和第二虚拟组件集。其中,第一虚拟组件集包括在所述电子设备接收唤醒词进入指令输入状态时,第一应用的第一界面中显示的一个或多个控件。所述第二虚拟组件集包括预设的至少一个控件,且所述第二虚拟组件集中包含的所有控件个数大于等于第一虚拟组件集中的控件个数,并且所述第二虚拟组件集与第一应用的第一界面类型相关联。可选的,所述第一界面的第一界面类型包括:视频播放、音乐播放、图片/照片预览、文本浏览等。
在第二虚拟组件集中包括第二控件,所述第二控件可以是一种常用控件,比如当在上述步骤102中用户下达的第一语音指令为“播放下一集”时,且“下一集”控件属于视频播放这种界面类型的虚拟组件集中的一个语音控件,则确定第二虚拟组件集为该视频播放界面所对应虚拟组件集。
105-2:确定所述第二控件,所述第二控件属于所述第二虚拟组件集中,但不属于所述第一虚拟组件集中。其中,所述第二控件的个数可以是一个或者多个。
例如,第一虚拟组件集中仅包括“播放/暂停”一个控件,在第二虚拟组件集中包括:播放/暂停、下一集、开启/关闭弹幕、发弹幕、倍速和退出共6个控件,由此确定所述第二控件有除了“播放/暂停”控件之外的其他所有控件,本示例中第二控件包括:下一集、开启/关闭弹幕、发弹幕、倍速和退出。
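上述步骤105-1和105-2确定第二控件的过程,本质上是两个组件集的集合差运算,可用Python集合示意(控件名沿用正文示例):

```python
# 第一虚拟组件集:电子设备被唤醒时第一界面中已显示的控件
first_set = {"播放/暂停"}
# 第二虚拟组件集:与"视频播放"界面类型关联的预设控件
second_set = {"播放/暂停", "下一集", "开启/关闭弹幕", "发弹幕", "倍速", "退出"}

# 第二控件 = 属于第二虚拟组件集、但不属于第一虚拟组件集的控件
second_controls = second_set - first_set
```
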
步骤105中,电子设备从SDK表中确定第二控件后,将所述第二控件所对应的控件图标添加到第一应用的第二界面上。同理地,如果电子设备根据虚拟组件集确定出多个第二控件,则将所有第二控件都显示到所述第二界面中。
具体地,例如图4A所示,确定所述第二控件为"下一集"时,将该"下一集"控件对应的控件图标36显示在当前的视频播放界面(即第二界面)。此外,所述第二界面中还包括第一控件,本示例中第一控件所对应的控件图标包括图3中的31、32、33、倍速34和←35。
可选的,另外还包括:将第二控件所对应的提示信息也添加到所述第二界面中。每个控件对应一个提示信息,且每个提示信息用于提示该控件对应的语音功能,用户可根据该提示信息下达对应的语音指令,当电子设备接收到包含该提示信息的语音指令时,根据所述对应关系启动该语音指令所对应的控件。例如图4B所示,当用户输入的第一语音指令为"播放下一集"时,查询图4A所示第一界面中未包括执行跳转到"下一集"操作对应的语音控件,并根据SDK表确定出第二控件为"下一集"时,则将该第二控件所对应的控件图标36,以及该第二控件对应的提示信息"下一集"361一并添加到图4B所示的第二界面中,以便向用户展示当前界面支持播放"下一集"的语音指令。
可选的,所述提示信息与控件之间的对应关系可以存储在上述SDK表中,或者单独存储,本实施例对此不予限制。另外,所述提示信息与所述文本内容可以相同,也可以不相同。用户再次下达的第一语音指令中可以包含除了所述提示信息之外的更多语音内容,本实施例对此不进行限制。
可选的,在显示所述第二控件时,可将所述第二控件对应的控件图标,以及所述提示信息一并显示在当前界面的空白区域,或者,还可以利用悬浮窗的形式添加在当前界面中,本实施例对具体的添加方式不进行限制。
本实施例提供的方法,当电子设备被用户唤醒后,电子设备可将用户下达的任何语音指令对应的控件都显示到电子设备的当前界面中,从而为用户再次下达语音指令时提供相应的服务,避免因不同应用界面上的控件不同,导致用户下达的语音指令在电子设备的当前界面中无法被执行的缺陷。本方法利用SDK表或虚拟组件集拓展了电子设备的语音控件功能,实现对第二控件的自动添加和显示,增强了对语音文本内容的服务功能,提高了用户的满意度。
此外,上述方法还包括:
106:启动所述第二控件并执行所述第一语音指令对应的操作,为用户提供语音服务。
具体地,一种可能的实现是,在利用所述SDK表在所述第一应用的第二界面上显示第二控件后,直接启动该第二控件,并执行第一语音指令的文本内容对应的操作,并输出服务响应。
另一种可能的实现是,当电子设备再次获取用户下达的第一语音指令时,所述第一语音指令对应的文本内容可以包含第二控件对应的提示信息,或与所述第二控件对应的提示信息相同,启动所述第二控件,执行所述第一语音指令对应的操作,并输出服务响应。例如,当接收到用户再次下达的“播放下一集”(或者“下一集”)的语音指令时,解析该语音指令得到文本内容包括“下一集”,启动所述“下一集”控件,执行播放“下一集”的操作从而为用户提供语音服务。
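上述"语音指令文本包含控件提示信息则启动该控件"的匹配逻辑,可用如下Python草图示意(提示信息与控件标识均为示例数据):

```python
# 提示信息 -> 控件标识 的对应关系(示意)
PROMPTS = {
    "下一集": "control_next_episode",
    "关闭弹幕": "control_danmaku_toggle",
}

def match_prompt(command_text):
    """若语音指令文本包含某控件的提示信息,返回该控件标识;否则返回 None。"""
    for prompt, control in PROMPTS.items():
        if prompt in command_text:
            return control
    return None
```

例如"播放下一集"包含提示信息"下一集",因此命中并启动"下一集"控件。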
可选的,在上述启动所述第二控件,并输出服务响应的过程中,具体包括:
106-1:检测所述电子设备的第二界面中的第二控件能否执行第一语音指令对应的操作,即判断第二控件能否为所述第一语音指令提供功能服务。
106-2:如果否,即第二控件不能提供所述功能服务,则所述电子设备可通过云端服务器或其他电子设备获得服务响应,该服务响应由云端服务器或其他电子设备执行所述第一语音指令对应的操作后生成,并传输至电子设备,所述电子设备接收后将该服务响应显示在显示屏中。比如,电子设备接收到用户下达的第二语音指令为"放大图片"后,在电子设备的第二界面的第二控件不支持"放大图片"的功能情况下,则会将原始图片发送至云端服务器或第二电子设备,由该云端服务器或第二电子设备对原始图片做放大处理。可选的,云端服务器还可以将该原始图片发送至其他具备"放大图片"功能的电子设备,并获取放大后的图片,最后将该放大后的图片发送给所述电子设备。
本方法利用云端服务器来为电子设备提供服务响应,还避免了在该电子设备本地对第二控件的软件开发,节约了软件开发成本。
106-3:如果是,即第二控件能够提供所述功能服务,则启动所述第二控件,执行所述第一语音指令对应的操作,并输出服务响应。具体过程与前述步骤104相同,不再赘述。
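步骤106-1至106-3"本地能执行则本地执行、否则交由云端服务器生成服务响应"的分支逻辑,可用如下Python草图示意(函数名与返回值均为假设的示例):

```python
def handle_command(command_text, local_supported, cloud_execute):
    """第二控件本地支持该指令则本地执行;否则调用云端执行并取回服务响应。"""
    if command_text in local_supported:
        return "本地响应:" + command_text
    return cloud_execute(command_text)

def fake_cloud_execute(command_text):
    # 模拟云端服务器执行第一语音指令对应的操作后返回的服务响应
    return "云端响应:" + command_text
```

例如本地支持"播放下一集"时直接输出本地响应;"放大图片"不被本地支持时走云端分支。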
可选的,在上述步骤105之后,该方法还包括:当电子设备接收用户下达的第二语音指令,获得所述第二语音指令对应的文本内容,且所述第二语音指令对应的文本内容与上述步骤102中第一语音指令的文本内容不同,但所述第二语音指令对应的文本内容与步骤105中已添加的一个第二控件的文本内容相同,则启动该已添加的第二控件,并执行所述第二语音指令的文本内容对应的操作,输出对应的服务响应。
例如,第二语音指令为“关闭弹幕”,且该“关闭弹幕”与第一语音指令的“播放下一集”不同,在所述电子设备的第二界面中已添加“关闭弹幕”的第二控件情况下,则启动该第二控件,并执行“关闭弹幕”的操作,并通过当前界面显示给用户。
应理解,执行所述第二语音指令的文本内容的控件可以是上述通过虚拟组件集确定的多个第二控件的一个,或者也可以是电子设备的原来本地所包含的多个控件中的一个,本实施例对此不予限制。
可选的,在获取或确定第二控件之后,方法还包括:电子设备将本地能够提供语音服务的控件和本地不能提供语音服务的控件进行区分展示,例如将本地支持的控件用第一颜色(比如绿色)显示,将本地不支持的控件用第二颜色(比如红色)显示在第二界面中,从而方便用户识别和区分,应理解,还可以采用其他方式进行区分,例如标识等,本实施例对具体的区分方式不做限定。本地不支持的控件即如上文所述的,该控件对应的语音指令需要通过云端服务器或其他电子设备获得服务响应。
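将本地支持与本地不支持的控件按颜色区分展示的逻辑,可用如下Python草图示意(绿色/红色的取值沿用正文举例,并非限定):

```python
def colorize_controls(all_controls, local_supported):
    """返回 {控件: 颜色}:本地支持的控件用第一颜色(绿),需云端执行的用第二颜色(红)。"""
    return {
        c: ("green" if c in local_supported else "red")
        for c in all_controls
    }

# 示例:"播放/暂停"本地支持,"放大图片"需借助云端服务器
colors = colorize_controls({"播放/暂停", "放大图片"}, {"播放/暂停"})
```
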
本实施例提供的方法,当电子设备添加完第二控件后,通过调用云端服务器可实现所有第二控件的功能,增强了电子设备的服务能力,从而为用户提供当前界面展示的所有语音控件功能,提高了用户满意度。
此外,在步骤105中,显示的第二控件所对应的服务响应可包括界面响应和全局响应,相应地,在上述步骤104和/或106中,电子设备输出的服务响应包括界面响应和全局响应,具体地,所述界面响应,是指电子设备在执行某一操作时不需要从当前第一应用跳转到第二应用,在当前第一应用的界面就可以完成。例如上述"播放下一集"、"关闭弹幕"、"放大图片"等操作。
所述全局响应,是指电子设备在执行某一操作时需要从当前第一应用跳转到第二应用,并通过第二应用的界面提供服务响应。比如图5所示,一种可能的实现方式包括:第一应用的界面为图片预览界面,当用户下达"音乐播放"的语音指令时,根据上述步骤103和105的描述先确定出需要添加的控件是"音乐播放"控件,然后在图片预览界面中添加该"音乐播放"的控件图标,再跳转到"音乐播放"所对应的应用界面,比如第二应用,此时第二应用的界面为音乐播放界面,最后按照上述步骤106的描述,直接或者再次接收到用户输入的语音指令时启动该"音乐播放"控件并执行音乐播放指令所对应的操作,从而为用户提供音乐播放的功能。其中,所述"音乐播放"的语音指令为一种切换指令,当电子设备接收到该切换指令后执行界面切换操作。
其中,上述第一应用或第二应用的界面包括:视频播放、音乐播放、图片/照片预览、文本浏览、拨号和发信息等界面。
可选的,对于界面响应,用户下达的语音指令可称为界面语音;对于全局响应,用户下达的语音指令可称为全局语音。
可选的,上述“音乐播放”的控件可通过悬浮窗的形式展示在图片预览的应用界面上,所述悬浮窗中可以展示音乐列表、歌曲名、播放/暂停等控件。另外,在视频播放的应用界面中,列表中还可以展示节目清单,比如当前所有TV频道正在直播的节目清单。
实施例二
本实施例还提供另一种控件显示方法,与前述实施例一的区别在于,在用户下达第一语音指令之前,本实施例已经确定第二控件,并将第二控件显示在电子设备的应用界面中,以便为用户提供丰富的服务响应。
其中,在电子设备的显示屏中显示第一应用的第一界面,所述第一界面中包括第一控件,如图6所示,方法包括:
201:电子设备接收用户输入的唤醒词。
202:响应于接收到的所述唤醒词,显示所述第一应用的第二界面,所述第二界面中包括所述第一控件和第二控件。具体地,包括:
202-1:响应于接收到的所述唤醒词,获取第一虚拟组件集和第二虚拟组件集。
所述第一虚拟组件集与第一应用的第一界面相关联,包括在电子设备被唤醒时第一界面中显示的一个或多个控件。如图7A所示,电子设备的第一应用为电子书APP应用时,第一界面中显示的控件有:退出"←"71、下载72、留言栏73、目录74、护眼亮度75、语音朗读76、阅读设置"Aa"77,由这些控件组成的集合为所述第一虚拟组件集。
所述第二虚拟组件集与第一应用的第一界面类型相关联,所述第一界面类型为文本浏览,所述文本浏览所对应的虚拟组件集中包含至少一个常用控件。所述常用控件可以包括第一虚拟组件集中的所有控件,并且第二虚拟组件集中包含的所有控件个数大于等于第一虚拟组件集中的控件个数。可选的,所述常用控件可利用SDK创建和添加。
具体地,一种获取所述第二虚拟组件集的方法是:每种应用的界面类型与一个虚拟组件集之间存在对应关系,如下表2所示,电子设备根据第一应用的第一界面类型,利用该对应关系可确定与当前应用的界面类型所对应的虚拟组件集,即为所述第二虚拟组件集。
表2、界面类型与虚拟组件集之间的对应关系
(表2在原文中以图像形式给出,其内容为每种界面类型与一个虚拟组件集之间的对应关系,例如视频播放界面对应虚拟组件集1,文本浏览界面对应虚拟组件集4,每个虚拟组件集中列出该界面类型对应的常用控件及其控件图标。)
由表2所示,每个界面类型对应一个虚拟组件集。例如,文本浏览界面对应“虚拟组件集4”,当确定该虚拟组件集4为所述第二虚拟控件集时,所述第二虚拟控件集中包括该虚拟组件集4中的所有控件。
可选的,还可以将上述对应关系与实施例一的SDK表相结合形成新的关系表,该新的关系表中包括界面类型、虚拟组件集,每个虚拟组件集中所包含的控件图标、以及每个控件所对应的提示信息等内容。
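表2的"界面类型→虚拟组件集"对应关系可用一个映射示意如下;其中视频播放对应组件集1、文本浏览对应组件集4沿用正文示例,音乐播放、图片/照片预览对应的编号为按顺序推测的示例:

```python
# 界面类型 -> 虚拟组件集(示意)
TYPE_TO_COMPONENT_SET = {
    "视频播放": "虚拟组件集1",
    "音乐播放": "虚拟组件集2",
    "图片/照片预览": "虚拟组件集3",
    "文本浏览": "虚拟组件集4",
}

def get_second_component_set(interface_type):
    """根据第一应用的第一界面类型确定第二虚拟组件集;无对应关系时返回 None。"""
    return TYPE_TO_COMPONENT_SET.get(interface_type)
```
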
202-2:根据所述第一虚拟组件集和所述第二虚拟组件集确定第二控件,所述第二控件是指属于所述第二虚拟组件集中的,但不属于所述第一虚拟组件集中的控件。
202-3:将所有所述第二控件显示在所述第一应用的第二界面中。
其中,在显示所述第二控件时,可将每个第二控件所对应的控件图标和提示信息一并显示在所述第二界面的空白区域中;或者,以悬浮窗的方式显示。其中,所述空白区域可理解为没有被控件覆盖的区域。或者,还可以在没有空白区域的情况下,通过缩小界面中现有的控件图标或挪动其位置以便腾出一块空白区域,再将所述控件图标和提示信息显示在所述空白区域中,本实施例对所述第二控件显示的位置和方式不予限制。
在如图7A所示实例中,在第一应用为电子书APP应用的情况下,该电子书APP应用显示第一界面,电子设备根据第一应用的第一界面获取第一虚拟组件集,所述第一虚拟组件集中包括如下控件:退出"←"71、下载72、留言栏73、目录74、护眼亮度75、语音朗读76、阅读设置"Aa"77。确定所述第一界面的界面类型为"文本浏览"界面,根据该"文本浏览"界面获取所述第二虚拟组件集,所述第二虚拟组件集对应表2中的"虚拟组件集4",查表2中的"虚拟组件集4"可获得如下控件:退出"←"71、下载72、留言栏73、目录74、护眼亮度75、语音朗读76、阅读设置"Aa"77、上一章"上一章"78和下一章"下一章"79。根据所述第一虚拟组件集和所述第二虚拟组件集确定待添加的第二控件为"上一章"78和"下一章"79,则将该第二控件所对应的控件图标"上一章"78和"下一章"79添加到第一应用的第二界面中。
另外,方法还包括:启动所述第二控件,执行第二控件所对应的操作,并输出服务响应。具体的执行过程与实施例一的“步骤106”相同。
203:接收用户的切换操作,在所述显示屏上显示第二应用的第一界面,所述第二应用的第一界面中包括所述第一控件。所述切换操作对应一种全局响应。
所述切换操作可以是用户手动切换,或者通过用户输入的语音指令来启动切换操作,例如当用户下达"播放新闻"的语音指令时,电子设备接收并解析该语音指令,执行切换界面的操作。如图7B所示,将电子书APP应用切换到第二应用,所述第二应用为视频播放应用,在第二应用的第一界面上包括以下第一控件:播放/暂停31、开启/关闭弹幕32、发弹幕33、倍速"倍速"34、退出"←"35,其中,退出控件"←"35与第一应用的第一界面中的控件"←"71相同。
204:接收用户再次输入的所述唤醒词。
205:响应于接收到的所述唤醒词,显示所述第二应用的第二界面,所述第二应用的第二界面中包括所述第一控件和第三控件。其中,所述第二应用的第一界面类型所对应的组件集合为第三虚拟组件集。
在如图7B的示例中,当用户再次输入唤醒词,比如"小艺小艺"时,电子设备确定当前视频播放应用对应的界面类型为"视频播放"界面,根据上述表2查找"视频播放"界面对应"虚拟组件集1",该虚拟组件集1中包括以下控件:播放/暂停、下一集、开启/关闭弹幕、发弹幕、倍速和退出。与第二应用的第一界面上所包含的控件相比,确定所述第三控件为"下一集",将该"下一集"的控件图标36添加在第二应用的第二界面上,具体的添加过程与前述实施例一相同,本实施例对此不再赘述。
另外,上述方法还包括:启动第三控件,执行该第三控件所对应的操作,并在第二应用的界面上显示输出的服务响应。比如,当用户下达"播放下一集"的语音指令时,启动控件"下一集"36并执行"播放下一集"的语音指令操作,然后将下一集的视频内容显示在第二应用的界面上。具体过程可参见实施例一的步骤106,此处不再赘述。
可选的,当电子设备接收到用户下达的其他语音指令,该语音指令对应第二应用的第二界面上的第四控件,则启动该第四控件,执行当前用户下达的语音指令所对应的操作,并将响应结果显示在第二应用的当前界面上。所述第四控件可以是播放/暂停31、开启/关闭弹幕32、发弹幕33、倍速"倍速"34、退出"←"35中的任意一个。
此外,电子设备还可以通过不同颜色或标识来区分设备能够提供语音服务和不能提供语音服务的控件,并且对于电子设备本地不能提供语音服务的控件,可借助云端服务器来实现控件功能,从而为用户提供丰富的语音服务功能。更具体的区分和调用能力过程可参见上述实施例一,此处不再赘述。
本实施例提供的方法,设置每个界面类型所对应的虚拟组件集,并将其与当前界面中所包含的控件进行比较,确定出当前界面中缺少但常用的控件,并将这些控件自动添加在电子设备的当前界面中,比如当电子设备接收到用户输入的唤醒词时,在第一应用的当前界面上自动添加并显示当前界面中没有的第二控件,实现了对第一应用所关联的第二控件的自动添加和显示,从而保证同一应用上展示相同的语音控件。比如利用本方法实现了在不同电子书应用的界面中都显示"上一章"和"下一章"的语音控件,从而方便用户进行语音交互,提高了用户体验。
此外,当第一应用切换到第二应用时,在第二应用的当前界面上自动添加并显示第三控件,从而实现用户在切换应用时,根据当前应用的界面类型在电子设备的显示屏上能够显示该界面类型所对应的所有控件,比如将电子书应用切换到视频播放应用时,可在视频播放的界面上自动添加并显示当前界面中缺少的“下一集”的语音控件,从而实现了电子设备的显示屏上显示不同应用所关联的所有语音控件,进而增强了电子设备的语音服务功能,提高了用户的满意度。
另外,为了提高控件与用户之间的语音交互效率,除了显示第二控件、第三控件之外,还显示新添加控件对应的提示信息,比如,当新添加一个“搜索”控件时,在该“搜索”控件的搜索框中可以包括以下提示:
提示1:在搜索框中或搜索框外显示文字或者出现悬浮的注解文字,例如“请说出搜索的内容,比如第100个元素,好看的笔”等,并高亮显示这些注释文字。
提示2:在搜索框中或搜索框外显示文字或者出现悬浮的注解文字,搜索文字可以是泛化信息,比如“搜索图片,搜索信息”,或者也可以是热词,比如“吐槽大会综艺”,“新冠病毒”等。
当用户根据上述提示说出搜索语音内容后,电子设备自动用预置的文字快速进行搜索,在数据库中查找结果,并输出服务响应。
另外,在上述实施例一和实施例二中,还包括:自动创建和更新控件集,从而为不同的电子设备提供丰富的语音服务功能。
具体地,一种可能实施方式是,利用IDE开发和创建各种语音控件,如图8所示,在一语音环境中包括手机、TV、车机等设备,每个设备中包含的语音控件不同,各自所支持的语音控件的功能也不同。例如,手机终端的虚拟组件集所能够提供服务的语音指令包括{A、B、C、D};TV的虚拟组件集所支持的语音指令包括{A、B、C、D、E、F、G};车机的虚拟组件集所支持的语音指令包括{F}。此外,还包括:系统预定义的常用虚拟组件集,比如利用SDK在IDE环境下开发的语音控件能够支持的语音指令包括{A、B、C、D、E、F、G},涵盖多个设备的分布式界面中的所有语音指令。
为了提高任一电子设备的服务功能,通过SDK集成的所有语音指令的虚拟组件集,对手机、TV和车机等设备添加至少一个目标控件,从而保证每个设备都具备执行所有语音指令的能力,提高用户体验。
比如对于手机,比较在分布式界面中存储的语音指令和手机支持的语音指令后,确定将语音指令{E、F、G}所对应的控件添加在手机的应用界面中,使得手机可以执行语音指令A至G的所有操作。同理地,对于TV,由于其当前所存储的语音指令与SDK中的语音指令种类相同,即TV具备执行所有语音指令的操作,所以无需添加新控件。对于车机,则需要添加其缺少的语音指令{B、C、D、E、F、G}所对应的控件。具体地添加相应控件的方法与前述实施例的方法相同,不详细赘述。
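上述按设备补齐语音指令能力的过程,可以用集合差示意:将SDK集成的语音指令全集与各设备已支持的指令集相减,即得到各设备需要新增目标控件的语音指令(指令A~G沿用正文示例):

```python
# SDK集成的所有语音指令的全集
FULL_SET = set("ABCDEFG")

# 各设备当前支持的语音指令(沿用正文示例)
devices = {
    "手机": set("ABCD"),
    "TV": set("ABCDEFG"),
    "车机": set("F"),
}

# 每个设备需要新增控件的语音指令 = 全集 - 已支持集
missing = {name: FULL_SET - supported for name, supported in devices.items()}
```

计算结果与正文一致:手机需补{E、F、G},TV无需新增,车机需补{A、B、C、D、E、G}。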
本实施例中,利用IDE创建和开发新的虚拟组件集,包含能够执行分布式界面中的所有语音指令的语音控件,并将这些控件自动地添加在不同的电子设备中,从而增强电子设备的语音服务能力。另外每个电子设备还支持远程语音能力调用,比如从云端服务器获得目标控件的服务响应,从而避免在每个电子设备本地都进行新添加控件的二次开发,节约了软件开发成本。
需要说明的是,上述实施例所述的虚拟组件集又称为“组件集合”,比如所述第二虚拟组件集可称为“第一组件集合”,所述第三虚拟组件集可称为“第二组件集合”。所述第一组件集合与第一应用的第一界面类型具有关联关系,所述第二组件集合与第二应用的第一界面类型具有一定关联关系,所述第一界面类型包括但不限于视频播放、音乐播放、图片/照片预览、文本浏览等。另外,所述第一应用和所述第二应用可以是视频播放、语音播放、图片/照片预览等应用APP。
下面介绍与上述方法实施例对应的装置实施例。
图9为本申请实施例提供的一种控件显示装置的结构示意图。所述装置可以是一种电子设备,或位于所述电子设备中的一个部件,例如芯片电路。并且,该装置可以实现前述实施例中的控件添加方法。
具体地,如图9所示,该装置可以包括:接收模块901、处理模块902。此外,所述装置还可以包括通信模块、存储单元等其他的单元或模块,所述通信模块和存储单元在图9中未示出。另外,该装置还包括显示屏,所述显示屏用于显示至少一个控件。
其中,接收模块901用于接收用户输入的唤醒词;处理模块902用于响应于接收到的所述唤醒词,指示显示屏显示第一应用的第二界面,其中第二界面中包括:第一控件和第二控件。处理模块902还用于接收用户的切换操作,指示所述显示屏显示第二应用的第一界面,所述第二应用的第一界面中包括所述第一控件;接收模块901还用于接收用户再次输入的所述唤醒词;处理模块902还用于响应于接收到的所述唤醒词,指示所述显示屏显示第二应用的第二界面,其中所述第二应用的第二界面中包括:第一控件和第三控件。
可选的,在本实施例的一种具体的实施方式中,处理模块902还用于显示所述第一应用的第二界面之前,根据第一应用的第一界面类型获取第一组件集合。其中所述第一组件 集合包括所述第二控件。
可选的,在本实施例的另一种具体的实施方式中,处理模块902还用于显示第二应用的第二界面之前,根据第二应用的第一界面类型获取第二组件集合。其中所述第二组件集合包括所述第三控件。
可选的,处理模块902可从存储单元中获取所述第一组件集合和所述第二组件集合。
可选的,所述第一应用的第二界面中还包括:与所述第二控件相对应的提示信息。
可选的,在本实施例的又一种具体的实施方式中,处理模块902还用于响应于第一语音指令,指示显示屏显示所述第一应用的第三界面,所述第三界面中包括执行所述第一语音指令对应的操作后输出的服务响应。
可选的,在本实施例的又一种具体的实施方式中,处理模块902还用于启动所述第二控件,执行所述第一语音指令对应的操作,并指示在所述第一应用的第三界面显示所述服务响应;或者,通过通信模块接收服务器发送的所述服务响应,并指示在所述第一应用的第三界面显示所述服务响应。其中,所述通信模块具有数据收发功能。
可选的,在本实施例的又一种具体的实施方式中,处理模块902还用于指示在所述第一应用的第二界面显示所述第二控件的控件图标;或者,指示在所述第一应用的第二界面显示所述第二控件的控件图标和所述第二控件的提示信息。
可选的,所述第一应用的第二界面还包括第四控件的控件图标,所述第四控件用于执行第二语音指令对应的操作,所述第二控件的控件图标为第一颜色,所述第四控件的控件图标为第二颜色,并且第一颜色与第二颜色不同。
处理模块902还用于响应于第一语音指令,启动所述第二控件,执行所述第一语音指令对应的操作;以及,响应于所述第二语音指令,通过通信模块向服务器发送指示信号,所述指示信号用于指示所述服务器执行所述第二语音指令对应的操作。
此外,处理模块902还用于指示显示屏显示第一服务响应或第二服务响应,所述第一服务响应为处理模块902执行所述第一语音指令对应的操作后输出的服务响应;所述第二服务响应为接收的来自所述服务器发送的服务响应,且该服务响应由所述服务器执行所述第二语音指令后输出。
可选的,在本实施例中,接收模块901用于接收用户输入的唤醒词;处理模块902用于响应于接收到的所述唤醒词,指示所述显示屏显示第一应用的第一界面,其中所述第一界面中包括第一控件。
接收模块901还用于接收用户输入的第一语音指令;处理模块902还用于响应于接收到的所述第一语音指令,指示所述显示屏显示所述第一应用的第二界面,其中所述第二界面中包括所述第一控件和第二控件,所述第二控件用于执行所述第一语音指令对应的操作。
可选的,在本实施例的一种具体的实施方式中,处理模块902还用于指示显示屏显示所述第一应用的第二界面之前,获得所述第一语音指令对应的文本内容,所述文本内容对应于所述第二控件;当所述第一应用的第一界面不包括所述第二控件时,获取所述第二控件。
可选的,在本实施例的另一种具体的实施方式中,处理模块902还用于通过SDK表获取所述第二控件,所述SDK表中包括所述文本内容和所述第二控件。
可选的,在本实施例的另一种具体的实施方式中,接收模块901还用于再次接收用户输入的所述第一语音指令;处理模块902还用于响应于所述第一语音指令,指示所述显示屏显示所述第一应用的第三界面,所述第三界面包括执行所述第一语音指令对应的操作后输出的服务响应。
另外,在一种硬件实现中,本实施例还提供了一种电子设备,图10示出了一种电子设备的结构示意图。该设备包括处理器110和存储器120,此外,还包括:USB接口130,电源管理模块140,电池141,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键191,摄像头192,显示屏193等。
应理解,本实施例示意的结构并不构成对电子设备的具体限定。在本申请另一些实施例中,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
其中,处理器110可以由集成电路(Integrated Circuit,IC)组成,例如可以由单颗封装的IC所组成,也可以由连接多颗相同功能或不同功能的封装IC而组成。举例来说,处理器110可以包括中央处理器(central processing unit,CPU)或数字信号处理器(Digital Signal Processor,DSP)等。
此外,处理器110还可以包括硬件芯片。该硬件芯片可以是专用集成电路(application specific integrated circuit,ASIC),可编程逻辑器件(programmable logic device,PLD)或其组合。上述PLD可以是复杂可编程逻辑器件(complex programmable logic device,CPLD),现场可编程逻辑门阵列(field-programmable gate array,FPGA),通用阵列逻辑(generic array logic,GAL)或其任意组合。
在一些实施例中,处理器110可以包括一个或多个接口。所述接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,SIM接口和/或通用串行总线(universal serial bus,USB)接口等。
存储器120,用于存储和交换各类数据或软件,包括SDK表,第一语音指令,第二语音指令,第一语音指令和第二语音指令所对应的文本内容,第一虚拟组件集、第二虚拟组件集、控件图标等,还用于存储音频、视频、图片/照片等文件。此外,存储器120中可以存储有计算机程序指令或代码。
具体地,存储器120可以包括易失性存储器(volatile memory),例如随机存取内存(Random Access Memory,RAM);还可以包括非易失性存储器(non-volatile memory),例如只读存储记忆体(read only memory,ROM)、快闪存储器(flash memory),硬盘(Hard Disk Drive,HDD)或固态硬盘(Solid-State Drive,SSD),存储器120还可以包括上述种类的存储器的组合。
显示屏193可用于显示第一控件、第二控件、第三控件所对应的控件图标和提示信息,显示不同的应用界面,例如第一应用的第一界面和第二界面,第二应用的第一界面和第二界面等。此外,显示屏193还可以显示图片、照片、文本信息,播放视频/音频等媒体流等。
具体的,显示屏193可包括显示面板和触控面板。其中显示面板可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板。触控面板也称为触摸屏、触敏屏等。在一些实施例中,电子设备100可以包括1个或N个显示屏193,N为大于1的正整数。
音频模块170、扬声器170A,受话器170B,麦克风170C可实现用户与电子设备之间的语音交互。其中音频模块170中包括音频电路,可将接收到的音频数据转换后的信号,传输到扬声器170A,由扬声器170A转换为声音信号输出。
麦克风170C用于接收用户输入的声音信号,比如唤醒词、第一语音指令、第二语音指令等,将该接收的声音信号转换为电信号,再传输至音频模块170,音频模块170接收后将电信号转换为音频数据,再将音频数据输出至处理器110做进一步处理,得到语音指令对应的文本内容。
传感器模块180可以包括至少一个传感器,比如压力传感器,陀螺仪传感器,气压传感器,磁传感器,加速度传感器,距离传感器,触摸传感器,指纹传感器等等。
按键191包括开机键,音量键等。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备充电,也可以用于电子设备与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如虚拟现实设备等。
电源管理模块140用于连接电池141与处理器110。电源管理模块140为处理器110,存储器120,显示屏193,摄像头192,移动通信模块150和无线通信模块160等供电。在一些实施例中,电源管理模块140可以设置于处理器110中。
电子设备的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器(或基带芯片)等实现。天线1和天线2用于发射和接收电磁波信号。电子设备中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。
移动通信模块150可以提供应用在电子设备上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。在一些实施例中,移动通信模块150的至少部分功能模块可以被设置于处理器110中。无线通信模块160可以提供应用在电子设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,WiFi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。无线通信模块160可以是集成至少一个通信处理模块的一个或多个器件。无线通信模块160还可以从处理器110接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,电子设备的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS)。
在本实施例中,当所述电子设备作为一种控件显示装置时,可以实现前述图2或图6所示的方法,并且前述图9所示装置中,接收模块901的功能可以由音频模块170或者音频模块170中的麦克风170C实现,处理模块902的功能可以由处理器110和显示屏193等部件来实现;所述存储单元的功能可以由存储器120实现。
此外,本申请实施例还提供了一种系统,该系统包括至少一个所述电子设备,以及还可以包括服务器,比如云端服务器,用于实现前述实施例中的控件显示方法。其中,所述服务器的结构可以与图10所示的电子设备的结构相同,也可以不同,本实施例对此不予限制。
此外,本申请实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时可包括本申请提供的控件添加方法的部分或全部步骤。所述的存储介质包括但不限于磁碟、光盘、ROM或RAM等。
在上述实施例中,可以全部或部分通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机程序指令,在计算机加载和执行所述计算机程序指令时,全部或部分地产生按照本申请上述各个实施例所述方法流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输。
此外,在本申请的描述中,除非另有说明,“多个”是指两个或多于两个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
以上所述的本申请实施方式并不构成对本申请保护范围的限定。

Claims (17)

  1. 一种控件显示方法,其特征在于,应用于一种电子设备,所述电子设备包括显示屏,在所述显示屏上显示第一应用的第一界面,所述第一界面中包括第一控件,所述方法包括:
    接收用户输入的唤醒词;
    响应于接收到的所述唤醒词,显示所述第一应用的第二界面,所述第二界面中包括所述第一控件和第二控件;
    接收用户的切换操作,在所述显示屏上显示第二应用的第一界面,所述第二应用的第一界面中包括所述第一控件;
    接收用户再次输入的所述唤醒词;
    响应于接收到的所述唤醒词,显示所述第二应用的第二界面,所述第二应用的第二界面中包括所述第一控件和第三控件。
  2. 根据权利要求1所述的方法,其特征在于,响应于接收到的所述唤醒词,显示所述第一应用的第二界面之前,还包括:
    根据所述第一应用的第一界面类型获取第一组件集合,所述第一组件集合包括所述第二控件。
  3. 根据权利要求1或2所述的方法,其特征在于,响应于接收到的所述唤醒词,显示所述第二应用的第二界面之前,还包括:
    根据所述第二应用的第一界面类型获取第二组件集合,所述第二组件集合包括所述第三控件。
  4. 根据权利要求1-3任一项所述的方法,其特征在于,所述第一应用的第二界面中还包括:与所述第二控件相对应的提示信息。
  5. 根据权利要求1-4任一项所述的方法,其特征在于,还包括:
    响应于第一语音指令,显示所述第一应用的第三界面,所述第三界面中包括执行所述第一语音指令对应的操作后输出的服务响应。
  6. 根据权利要求5所述的方法,其特征在于,响应于第一语音指令,显示所述第一应用的第三界面,包括:
    启动所述第二控件,执行所述第一语音指令对应的操作,并在所述第一应用的第三界面显示所述服务响应;
    或者,所述电子设备接收服务器发送的所述服务响应,并在所述第一应用的第三界面显示所述服务响应。
  7. 根据权利要求1-6任一项所述的方法,其特征在于,显示所述第一应用的第二界面,包括:
    在所述第一应用的第二界面显示所述第二控件的控件图标;
    或者,在所述第一应用的第二界面显示所述第二控件的控件图标和所述第二控件的提示信息。
  8. 根据权利要求7所述的方法,其特征在于,所述第一应用的第二界面还包括第四控件的控件图标,所述第四控件用于执行第二语音指令对应的操作;
    所述第二控件的控件图标为第一颜色,所述第四控件的控件图标为第二颜色,所述第一颜色与所述第二颜色不同;
    响应于所述第一语音指令,所述电子设备启动所述第二控件,执行所述第一语音指令对应的操作;
    响应于所述第二语音指令,所述电子设备向服务器发送指示信号,所述指示信号用于指示所述服务器执行所述第二语音指令对应的操作。
  9. 一种控件显示方法,其特征在于,应用于一种电子设备,所述电子设备包括显示屏,所述方法包括:
    接收用户输入的唤醒词;
    响应于接收到的所述唤醒词,在所述显示屏上显示第一应用的第一界面,所述第一界面中包括第一控件;
    接收用户输入的第一语音指令;
    响应于接收到的所述第一语音指令,显示所述第一应用的第二界面,所述第二界面中包括所述第一控件和第二控件,所述第二控件用于执行所述第一语音指令对应的操作。
  10. 根据权利要求9所述的方法,其特征在于,响应于接收到的所述第一语音指令,显示所述第一应用的第二界面之前,还包括:
    获得所述第一语音指令对应的文本内容,所述文本内容对应于所述第二控件;
    当所述第一应用的第一界面不包括所述第二控件时,获取所述第二控件。
  11. 根据权利要求10所述的方法,其特征在于,所述获取所述第二控件,包括:
    通过软件开发工具包SDK表获取所述第二控件,所述SDK表中包括所述文本内容和所述第二控件。
  12. 根据权利要求9-11任一项所述的方法,其特征在于,还包括:
    再次接收用户输入的所述第一语音指令;
    响应于所述第一语音指令,显示所述第一应用的第三界面,所述第三界面包括执行所述第一语音指令对应的操作后输出的服务响应。
  13. 根据权利要求12所述的方法,其特征在于,响应于所述第一语音指令,显示所述第一应用的第三界面,包括:
    启动所述第二控件,执行所述第一语音指令对应的操作,并在所述第一应用的第三界面显示所述服务响应;
    或者,所述电子设备接收服务器发送的所述服务响应,并在所述第一应用的第三界面显示所述服务响应。
  14. 根据权利要求9-13任一项所述的方法,其特征在于,显示所述第一应用的第二界面,包括:
    在所述第一应用的第二界面显示所述第二控件的控件图标;
    或者,在所述第一应用的第二界面显示所述第二控件的控件图标和所述第二控件的提示信息。
  15. 根据权利要求14所述的方法,其特征在于,所述第一应用的第二界面还包括第三控件的控件图标,所述第三控件用于执行第二语音指令对应的操作;
    所述第二控件的控件图标为第一颜色,所述第三控件的控件图标为第二颜色,所述第一颜色与所述第二颜色不同;
    响应于所述第一语音指令,所述电子设备启动所述第二控件,执行所述第一语音指令对应的操作;
    响应于所述第二语音指令,所述电子设备向服务器发送指示信号,所述指示信号用于指示所述服务器执行所述第二语音指令对应的操作。
  16. 一种电子设备,其特征在于,包括处理器和存储器;
    所述存储器,用于存储计算机程序指令;
    所述处理器,用于执行所述存储器中存储的所述指令,使得所述电子设备执行如权利要求1至15中任一项所述的方法。
  17. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,当其在计算机上运行时,使得计算机执行权利要求1至15中任一项所述的方法。
PCT/CN2021/106385 2020-07-28 2021-07-15 一种控件显示方法和设备 WO2022022289A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21849198.3A EP4181122A4 (en) 2020-07-28 2021-07-15 CONTROL DISPLAY METHOD AND APPARATUS
US18/006,703 US20230317071A1 (en) 2020-07-28 2021-07-15 Control display method and device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010736457.4A CN114007117B (zh) 2020-07-28 2020-07-28 一种控件显示方法和设备
CN202010736457.4 2020-07-28

Publications (1)

Publication Number Publication Date
WO2022022289A1 true WO2022022289A1 (zh) 2022-02-03

Family

ID=79920314

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/106385 WO2022022289A1 (zh) 2020-07-28 2021-07-15 一种控件显示方法和设备

Country Status (4)

Country Link
US (1) US20230317071A1 (zh)
EP (1) EP4181122A4 (zh)
CN (1) CN114007117B (zh)
WO (1) WO2022022289A1 (zh)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6615176B2 (en) * 1999-07-13 2003-09-02 International Business Machines Corporation Speech enabling labeless controls in an existing graphical user interface
US20120078635A1 (en) * 2010-09-24 2012-03-29 Apple Inc. Voice control system
CN103869931A (zh) * 2012-12-10 2014-06-18 三星电子(中国)研发中心 语音控制用户界面的方法及装置
CN104184890A (zh) * 2014-08-11 2014-12-03 联想(北京)有限公司 一种信息处理方法及电子设备
CN104599669A (zh) * 2014-12-31 2015-05-06 乐视致新电子科技(天津)有限公司 一种语音控制方法和装置
US9081550B2 (en) * 2011-02-18 2015-07-14 Nuance Communications, Inc. Adding speech capabilities to existing computer applications with complex graphical user interfaces
CN105957530A (zh) * 2016-04-28 2016-09-21 海信集团有限公司 一种语音控制方法、装置和终端设备
CN110060672A (zh) * 2019-03-08 2019-07-26 华为技术有限公司 一种语音控制方法及电子设备
CN110691160A (zh) * 2018-07-04 2020-01-14 青岛海信移动通信技术股份有限公司 一种语音控制方法、装置及手机

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103200329A (zh) * 2013-04-10 2013-07-10 威盛电子股份有限公司 语音操控方法、移动终端装置及语音操控系统
JP5955299B2 (ja) * 2013-11-08 2016-07-20 株式会社ソニー・インタラクティブエンタテインメント 表示制御装置、表示制御方法、プログラム及び情報記憶媒体
US10504509B2 (en) * 2015-05-27 2019-12-10 Google Llc Providing suggested voice-based action queries
US10586535B2 (en) * 2016-06-10 2020-03-10 Apple Inc. Intelligent digital assistant in a multi-tasking environment
CN113794800B (zh) * 2018-11-23 2022-08-26 华为技术有限公司 一种语音控制方法及电子设备
CN110225386B (zh) * 2019-05-09 2021-09-14 海信视像科技股份有限公司 一种显示控制方法、显示设备


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4181122A4

Also Published As

Publication number Publication date
US20230317071A1 (en) 2023-10-05
EP4181122A4 (en) 2024-01-10
CN114007117B (zh) 2023-03-21
EP4181122A1 (en) 2023-05-17
CN114007117A (zh) 2022-02-01

Similar Documents

Publication Publication Date Title
US11861161B2 (en) Display method and apparatus
US20220342850A1 (en) Data transmission method and related device
US20170235435A1 (en) Electronic device and method of application data display therefor
WO2021078284A1 (zh) 一种内容接续方法及电子设备
WO2021259100A1 (zh) 分享方法、装置和电子设备
TWI438675B (zh) 提供情境感知援助說明之方法、裝置及電腦程式產品
US11647108B2 (en) Service processing method and apparatus
US9811510B2 (en) Method and apparatus for sharing part of web page
WO2021204098A1 (zh) 语音交互方法及电子设备
US20170017451A1 (en) Method and system for managing applications running on smart device using a wearable device
CN114286165B (zh) 一种显示设备、移动终端、投屏数据传输方法及系统
WO2015096747A1 (zh) 操作响应方法、客户端、浏览器及系统
WO2020200173A1 (zh) 文档输入内容的处理方法、装置、电子设备和存储介质
CN111684778A (zh) 应用功能的实现方法及电子设备
JP2023506936A (ja) マルチ画面共働方法およびシステム、ならびに電子デバイス
WO2022068483A9 (zh) 应用启动方法、装置和电子设备
WO2021244429A1 (zh) 一种控制应用程序安装的方法及装置
US20240069850A1 (en) Application Sharing Method, Electronic Device, and Storage Medium
AU2014293763A1 (en) Method and apparatus for providing Graphic User Interface
WO2023130921A1 (zh) 一种适配多设备的页面布局的方法及电子设备
US9734538B2 (en) Integrated operation method for social network service function and system supporting the same
JP7234379B2 (ja) スマートホームデバイスによってネットワークにアクセスするための方法および関連するデバイス
CN112911380B (zh) 一种显示设备及与蓝牙设备的连接方法
CN105320616A (zh) 外部设备控制方法及装置
CN113784200A (zh) 通信终端、显示设备及投屏连接方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21849198

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021849198

Country of ref document: EP

Effective date: 20230207

NENP Non-entry into the national phase

Ref country code: DE