CN116208814A - Method for setting starting function and electronic equipment


Info

Publication number
CN116208814A
Authority
CN
China
Prior art keywords
interface
setting item
function
event
target setting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111443430.7A
Other languages
Chinese (zh)
Inventor
燕琼瑶 (Yan Qiongyao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202111443430.7A priority Critical patent/CN116208814A/en
Publication of CN116208814A publication Critical patent/CN116208814A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443: OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04847: Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/44: Arrangements for executing specific programs
    • G06F9/451: Execution arrangements for user interfaces
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a method for setting a start-up function and an electronic device, relates to the field of terminal technologies, and can provide a setting reference during the process of setting a function, so as to improve the efficiency of human-computer interaction. The electronic device displays a first interface including operation instruction information for a target setting item, the operation instruction information describing how to operate the target setting item to turn on a first function of the electronic device. In response to a first event, the electronic device displays a second interface including the operation instruction information and a control of the target setting item. In response to a second event on the control of the target setting item, the electronic device completes the setting of the target setting item.

Description

Method for setting starting function and electronic equipment
Technical Field
The application relates to the field of terminal technologies, and in particular, to a method for setting a start-up function and an electronic device.
Background
In the process of using electronic devices such as smart televisions (e.g., smart screens) and mobile phones, users often need to enable certain settings before the corresponding functions can be used. For example, the voice wake-up setting needs to be turned on before the "smart voice" function can be used. With the "smart voice" function, the smart screen can complete a specified operation, such as querying the weather or making a call, according to the user's voice input. Accordingly, before using a certain function, a user needs to actively find the setting item to be turned on and complete the setting before the function can be used.
However, in implementing the embodiments of the present application, the inventor found the following problems with the prior-art approach of completing the corresponding settings before using a function. First, some users may not know which settings need to be turned on, which makes enabling them difficult. Second, even when a user knows that a setting needs to be turned on, the user still has to search for the corresponding setting step by step before completing it, so setting efficiency is low.
In view of the foregoing, a solution for enabling a function is needed to solve the problems of difficulty in enabling the corresponding settings and low setting efficiency before a certain function is used.
Disclosure of Invention
The embodiments of the present application provide a method for setting a start-up function and an electronic device, which provide a setting reference to the user during the process of setting the function, so as to improve the efficiency of human-computer interaction.
In a first aspect, an embodiment of the present application provides a method for setting a start-up function. The method may be applied to electronic devices such as a television (e.g., a smart television) or a mobile phone, where the electronic device can implement a first function. The first function can be turned on only after the setting of the corresponding setting item is completed. The electronic device displays a first interface; for example, the first interface is a guide interface for enabling the first function. The first interface includes the operation steps for enabling the first function in the electronic device, and the operation steps include operation instruction information for a target setting item. The operation instruction information describes how to operate the target setting item so as to turn on the first function. The target setting item is a setting item that enables the electronic device to implement the first function. Taking the electronic device being a smart television and the first function being the "smart voice" function as an example, if the "smart voice" function can be turned on only after the voice recognition setting item in the system settings is turned on, the operation instruction information may be "please turn on the voice recognition setting item". The electronic device may then receive a first event, where the first event triggers the electronic device to use the first function. Taking the electronic device being a smart television as an example, the first event may be a signal for using the first function sent by the remote controller to the smart television. In response to the first event, the electronic device may display a second interface including the operation instruction information and a control of the target setting item, where the control of the target setting item can be used to set the target setting item.
That is, in addition to the control of the target setting item, the second interface includes a setting reference, namely the operation instruction information. The setting can then be completed against this reference. Afterwards, the electronic device may receive a second event on the control of the target setting item, where the second event triggers the electronic device to set the displayed control of the target setting item. The electronic device may set the control of the target setting item in response to the second event.
In summary, with the method of the embodiments of the present application, the electronic device can display the control of the target setting item and the setting reference at the same time, based on the user's demand for the first function (i.e., the first event). The setting can therefore be completed by comparison within a single interface, which reduces the setting difficulty and improves the setting efficiency.
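The interaction described above can be sketched as a minimal event-driven flow. This is an illustrative sketch only; the class and method names below are assumptions, not taken from the patent:

```python
# Illustrative sketch of the claimed flow (all names are hypothetical).

class SettingItem:
    """A setting item that enables the first function (e.g. voice recognition)."""
    def __init__(self, name, enabled=False):
        self.name = name
        self.enabled = enabled

class Device:
    def __init__(self, target_item):
        self.target = target_item
        self.interface = "first"  # guide interface with operation instructions

    def on_first_event(self):
        # First event: the user asks to use the first function. Display the
        # second interface: operation instructions plus the item's control.
        self.interface = "second"

    def on_second_event(self):
        # Second event: the user operates the displayed control, which
        # completes the setting of the target setting item.
        if self.interface == "second":
            self.target.enabled = True

dev = Device(SettingItem("voice recognition"))
dev.on_first_event()
dev.on_second_event()
print(dev.target.enabled)  # True
```

The point of the sketch is that both events act on the same interface state, mirroring how the second interface holds the instruction and the control together.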
In one possible design manner of the first aspect, after completing the setting of the target setting item, the electronic device displays a third interface in response to the first event. The third interface is an interface that implements the first function.
Therefore, with the method of this embodiment, after the setting is completed, the electronic device responds to the user's demand for the first function again (i.e., the first event) and can quickly enter use of the first function.
In another possible design of the first aspect, the first interface includes a first option. For example, the first option is the "experience immediately" button 603 in (a) of FIG. 6A. The first event includes an event of selecting the first option. Illustratively, the event of selecting the first option may be the user clicking or long-pressing the first option. As another example, where the electronic device is a television, the event of selecting the first option may be a control command for selecting the first option sent to the television by the television's remote controller. For example, after positioning the active cursor on the first option, the user presses the confirmation key of the remote controller, so that the remote controller sends the control command for selecting the first option to the television. Alternatively, the first event includes an event of the user inputting a preset gesture, or an event of the user inputting a preset voice. For example, the preset voice is "please turn on … (the first function)".
Therefore, with the method of this embodiment, the electronic device can recognize a direct or indirect operation of the user on the first interface as an operation indicating a demand for the first function.
In another possible design of the first aspect, the electronic device stores a correspondence between a plurality of functions in the electronic device and the call parameters of the setting items that enable the electronic device to implement those functions; the plurality of functions includes the first function. A call parameter may be a setting-item path, an interface parameter, or the like. Taking a setting-item path as an example, two entries of the correspondence are as follows: first, the function "smart voice" with the call parameter "settings/intelligent assistant/smart voice"; second, the function "control the smart screen with a mobile phone" with the call parameter "settings/network and link/smart screen control/link". Correspondingly, displaying the second interface by the electronic device in response to the first event further includes: in response to the first event, the electronic device finds the first function in the correspondence and looks up the first call parameter corresponding to the first function. The electronic device calls the target setting item according to the first call parameter, and can then display the control of the target setting item in the second interface according to the call result, so as to complete the setting that enables the function.
Therefore, by adopting the method of the embodiment, the electronic equipment can conveniently call the target setting item for setting according to the stored corresponding relation.
In another possible design of the first aspect, displaying the second interface by the electronic device in response to the first event includes: if the target setting item is in the off state, the electronic device displays the second interface in response to the first event; if the target setting item is in the on state, the electronic device displays a third interface in response to the first event. That is, only when the target setting item is not yet turned on does the electronic device, in response to the first event, simultaneously display the information of the target setting item and the target setting item itself for setting by comparison. When the target setting item is already on, the first function can be used directly.
Therefore, by adopting the method of the embodiment, the electronic equipment can selectively display the target setting items, so that invalid display is avoided, and the display rationality is improved.
In addition, if there are multiple target setting items, the target setting items being in the off state means that at least one target setting item is off, and the target setting items being in the on state means that all target setting items are on.
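The on/off rule for multiple target setting items reduces to an all-items check, sketched below with hypothetical names:

```python
# Hypothetical check of the rule above: with multiple target setting
# items, "off" means at least one item is off, and "on" means all are on.
def choose_interface(item_states):
    # Second interface (instructions + controls) while setup is needed;
    # third interface (the function itself) once every item is on.
    return "third" if all(item_states.values()) else "second"

items = {"voice recognition": True, "voice wake-up": False}
print(choose_interface(items))  # second: at least one item is still off
items["voice wake-up"] = True
print(choose_interface(items))  # third: all items are on
```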
In another possible design of the first aspect, the second interface includes the first interface and a floating window displayed on the first interface, and the control of the target setting item is displayed in the floating window.
Therefore, by adopting the method of the embodiment, the electronic device can realize that the control of the target setting item and the setting reference are simultaneously included in the display screen in the form of displaying the floating window on the first interface and displaying the control of the target setting item in the floating window. Therefore, the interface does not need to be changed greatly, and the display efficiency can be improved.
In another possible design of the first aspect, when the control of the target setting item is displayed in the floating window, after the setting of the target setting item is completed, the electronic device may further receive a third event, where the third event triggers the electronic device to close the floating window. Taking the electronic device being a television as an example, the third event may be: the user presses the return key on the television's remote controller, so that the remote controller sends a control command for returning to the first interface to the television. The electronic device may close the floating window in response to the third event, thereby closing the display of the target setting item. Subsequently, displaying the third interface by the electronic device in response to the first event includes: after the electronic device closes the floating window, the electronic device displays the third interface in response to the first event.
Therefore, by adopting the method of the embodiment, after the setting of the target setting item is completed, the electronic device can close the floating window, so that the first event can be responded again conveniently, and the use interface of the first function is displayed.
In another possible design of the first aspect, the second interface is a split-screen interface: one split screen is the first interface, which displays the operation instruction information, and the other split screen displays the control of the target setting item.
Therefore, by adopting the method of the embodiment, the electronic equipment can realize the control and the setting reference which simultaneously comprise the target setting item in the display screen in the form of split-screen display. And controls and setting references for the target setting item may be made to have explicit display limits.
In another possible design of the first aspect, taking the electronic device being a television as an example, when the target setting item is displayed in a split screen, the active cursor remains in the split screen displaying the target setting item after the setting of the target setting item is completed. In this case, the television may further receive a control command for moving the active cursor, sent to the television when the user operates the remote controller. In response to the control command for moving the active cursor, the television can move the active cursor into the split screen of the second interface that displays the operation instruction information. Subsequently, displaying the third interface by the electronic device in response to the first event includes: after the television moves the active cursor into the split screen of the second interface that displays the operation instruction information, the electronic device displays the third interface in response to the first event.
Therefore, by adopting the method of the embodiment, after the setting of the target setting item is completed, the electronic device can move the movable cursor to the split screen for displaying the operation description information in the second interface, so that the user interface of the first function can be displayed conveniently and again in response to the first event.
In another possible design of the first aspect, before displaying the first interface, the electronic device may display a fourth interface including a plurality of options, the plurality of options corresponding one-to-one to a plurality of functions of the electronic device. For example, the fourth interface may be the interface 1501 shown in FIG. 15, and the plurality of options includes a "watch movies and shows for free" option and a "browse album across devices" option. Because obtaining the related setting items in a third-party application requires the third-party application's authorization, the feasibility and accuracy of obtaining them are difficult to guarantee. On this basis, after displaying the fourth interface, the electronic device may receive a selection operation on a second option, where the second option is one of the plurality of options, the second option corresponds to the first function, and the target setting item that enables the electronic device to implement the first function is a system setting item. The electronic device may display the first interface in response to the selection operation on the second option. Alternatively, after displaying the fourth interface, the electronic device may receive a selection operation on a third option, where the third option is one of the plurality of options, the third option corresponds to a second function, and the target setting item that enables the electronic device to implement the second function is an application setting item, or the electronic device does not need a setting item to implement the second function. The electronic device displays a fifth interface in response to the selection operation on the third option, and the fifth interface does not include information of a target setting item that enables the electronic device to implement the second function.
It can be seen that, with the method of this embodiment, the electronic device displays the first interface including the information of the target setting item only after a function supported by a system setting item (such as the first function) is selected, and does not display the information of the target setting item after a function not supported by a system setting item (such as the second function) is selected. Thus, while facilitating setting by comparison, the difficulty of displaying the comparison reference (i.e., the information of the target setting item) is reduced.
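The branching on the fourth interface can be sketched as a check of which kind of setting item backs each option. The option names reuse the examples from the text; the table and function names are hypothetical:

```python
# Hypothetical branch for the fourth interface: only functions backed by
# *system* setting items lead to the guide interface (first interface).
OPTION_BACKING = {
    "smart voice": "system",                       # e.g. the second option
    "browse album across devices": "application",  # e.g. the third option
}

def interface_for_option(option):
    # System setting item -> first interface (with setting-item info);
    # application setting item or none -> fifth interface (without it).
    kind = OPTION_BACKING.get(option)
    return "first" if kind == "system" else "fifth"

print(interface_for_option("smart voice"))                 # first
print(interface_for_option("browse album across devices"))  # fifth
```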
In a second aspect, embodiments of the present application also provide an electronic device including a display screen, a memory, and one or more processors. The display screen, the memory, and the processor are coupled. The memory is for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the steps of: the electronic device displays a first interface, wherein the first interface comprises operation instruction information for a target setting item, and the operation instruction information is used for describing a mode of operating the target setting item to start a first function in the electronic device. The electronic equipment responds to the first event, and displays a second interface, wherein the second interface comprises the operation instruction information and the control of the target setting item. And the electronic equipment responds to a second event of the control of the target setting item to finish setting the target setting item.
In one possible design manner of the second aspect, the electronic device stores a correspondence between a plurality of functions in the electronic device and call parameters of setting items supporting the electronic device to implement the functions; the plurality of functions includes the first function. The computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment responds to the first event, searches the first function from the corresponding relation, and searches a first calling parameter corresponding to the first function. And the electronic equipment calls the target setting item according to the first calling parameter, and displays a control of the target setting item in the second interface.
In another possible design of the second aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: and if the target setting item is in a closed state, the electronic equipment responds to the first event and displays the second interface.
In another possible design of the second aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: and if the target setting item is in an opening state, the electronic equipment responds to the first event and displays a third interface. The third interface is an interface that implements the first function.
In another possible design manner of the second aspect, the target setting item has a plurality of items, and the target setting item is in a closed state, including: at least one of the target setting items is in a closed state; the target setting item is in an open state, and comprises: all the target setting items are in an open state.
In another possible design manner of the second aspect, the second interface includes the first interface and a floating window displayed on the first interface, and the control of the target setting item is displayed in the floating window.
In another possible design of the second aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic device closes the floating window in response to a third event; after the floating window is closed, the electronic device displays a third interface in response to the first event, where the third interface is an interface that implements the first function.
In another possible design manner of the second aspect, the electronic device is a television, and the third event includes: and the remote controller of the television sends a control command returned to the first interface to the television.
In another possible design manner of the second aspect, the second interface is a split screen interface, one interface of the second interface is the first interface, and the other interface of the second interface displays the control of the target setting item.
In another possible design of the second aspect, the electronic device is a television. The computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: in response to a control command for moving an active cursor sent by the television's remote controller, the television moves the active cursor to the first interface, which displays the operation instruction information in the second interface; after the television moves the active cursor to the first interface displaying the operation instruction information in the second interface, the electronic device displays a third interface in response to the first event. The third interface is an interface that implements the first function.
In another possible design manner of the second aspect, the first interface includes a first option, and the first event includes an event that selects the first option; alternatively, the first event includes an event of a user input preset gesture; alternatively, the first event includes an event in which a user inputs a preset voice.
In another possible design manner of the second aspect, the electronic device is a television, and the event that the user selects the first option includes: and the remote controller of the television sends a control command for selecting the first option to the television.
In another possible design of the second aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic equipment displays a fourth interface, wherein the fourth interface comprises a plurality of options, and the options are in one-to-one correspondence with a plurality of functions of the electronic equipment. The electronic device displays the first interface in response to a selection operation of a second option. The second option is one of the plurality of options, the second option corresponds to a first function, and the target setting item supporting the electronic device to realize the first function is a system setting item.
In another possible design of the second aspect, the computer instructions, when executed by the processor, cause the electronic device to further perform the steps of: the electronic device displays a fifth interface in response to a selection operation on a third option. The third option is one of the plurality of options, the third option corresponds to a second function, and the target setting item that enables the electronic device to implement the second function is an application setting item. The fifth interface does not include information of a target setting item that enables the electronic device to implement the second function. The electronic device displays a sixth interface in response to the first event, where the sixth interface is an interface that implements the second function.
In a third aspect, embodiments of the present application provide a chip system that is applied to an electronic device including a display screen and a memory; the system-on-chip includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a circuit; the interface circuit is configured to receive a signal from a memory of the electronic device and to send the signal to the processor, the signal including computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any one of its possible designs.
In a fourth aspect, the present application provides a computer storage medium comprising computer instructions which, when run on an electronic device, cause the electronic device to perform a method as described in the first aspect and any one of its possible designs.
In a fifth aspect, the present application provides a computer program product which, when run on a computer, causes the computer to perform the method according to the first aspect and any one of its possible designs.
It will be appreciated that, for the advantages achieved by the electronic device of the second aspect, the chip system of the third aspect, the computer storage medium of the fourth aspect, and the computer program product of the fifth aspect, reference may be made to the advantages of the first aspect and any of its possible designs, and details are not repeated here.
Drawings
Fig. 1 is a first schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 2 is a second schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 3 is a schematic diagram of a hardware structure of a smart television according to an embodiment of the present application;
Fig. 4 is a schematic diagram of a remote controller panel adapted to a smart television according to an embodiment of the present application;
Fig. 5 is a first flowchart of a method for setting a start-up function according to an embodiment of the present application;
Fig. 6A is a third schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 6B is a schematic diagram of event 1 according to an embodiment of the present application;
Fig. 7 is a fourth schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 8 is a fifth schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 9 is a sixth schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 10 is a schematic diagram of event 3 according to an embodiment of the present application;
Fig. 11 is a seventh schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 12 is a second flowchart of a method for setting a start-up function according to an embodiment of the present application;
Fig. 13 is an eighth schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 14 is a ninth schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 15 is a tenth schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 16 is an eleventh schematic diagram of a smart television interface according to an embodiment of the present application;
Fig. 17 is a block diagram of a chip system according to an embodiment of the present application.
Detailed Description
The embodiment of the present application provides a method for setting a starting function, which can be applied to an electronic device capable of realizing a specific function, where the specific function is a function that can only be started after the corresponding settings are completed in advance. By way of example, the electronic device may be a mobile phone, a smart television (smart screen), a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a cellular phone, a personal digital assistant (personal digital assistant, PDA), an augmented reality (augmented reality, AR)/virtual reality (virtual reality, VR) device, and the like; the specific form of the electronic device is not particularly limited in the embodiments of the present application.
Further, the method can be applied to scenarios of starting a specific function. For example, a scenario in which the screen-casting settings are completed before the screen-casting function is started. For another example, a scenario in which the positioning settings are completed before the positioning function is started. For another example, a scenario in which the voice wake-up setting item is set before the "smart voice" function is started.
In the following embodiments, the above electronic device is mainly taken to be a smart television as an example, and the solution of the present application will be described in connection with a scenario of starting a function introduced by the player skill application (e.g., the "smart voice" function). The player skill application of the smart television can introduce certain functions of the smart television in the form of videos, text, and the like, and can guide the user in using those functions.
The following describes the player skill application of the smart television in conjunction with fig. 1 and 2, and describes a specific implementation of completing the corresponding settings before starting a function introduced by the player skill application.
Referring to fig. 1, after the smart television is started, it may display the interface 101 shown in (a) of fig. 1, where the interface 101 includes application icons of a plurality of applications, for example, an application icon of an education center application, an application icon of a media center application, and an application icon 102 of the player skill application. The smart television can receive a user's selection operation on any application icon. It should be understood that, in the smart television, a user may select various interface elements, such as icons and menu options, by means of a remote controller, voice, gestures, or finger touch. In response to the user's selection of the application icon 102, the smart television may display the interface 103 shown in (b) of fig. 1, where the interface 103 includes the primary menus of the player skill application, such as quick start, audio-visual entertainment, affinity interaction, practical skills, user manual, and the like, and the interface 103 also includes the secondary menus of the currently selected primary menu. For example, the currently selected primary menu, quick start, includes a secondary menu of interface operation and a secondary menu 104 of smart voice. The smart television may receive a user's selection operation on a primary menu and its corresponding secondary menus in the interface 103. In response to the user's selection operation on the secondary menu 104 (i.e., the "smart voice" function), the smart television may introduce the "smart voice" function in the form of video, text, etc., and after the introduction is completed, may display the interface 105 shown in (c) of fig. 1, where the interface 105 includes the operation key points for turning on the "smart voice" function, such as '01 Say "hello YOYO" to the screen to wake up smart voice; 02 ……'. It should be understood that, in the example shown in fig. 1, the operation key points are presented after the introduction of the "smart voice" function is completed, so as to illustrate the presentation of the operation key points. In actual implementation, the smart television may instead introduce the operation key points step by step in the course of introducing the "smart voice" function. The embodiment of the present application is not particularly limited thereto.
Then, the user needs to exit the interface currently prompting the operation key points and enter the system settings interface to set the setting items, such as voice wake-up and voice feature recognition, required for starting the "smart voice" function. For example, the smart television may receive a user's operation to start the system settings, such as an operation of selecting a settings icon. The smart television may display the interface 201 shown in (a) of fig. 2 in response to the operation to start the system settings, the interface 201 including a plurality of system setting items such as image, sound, and smart assistant. The smart television may receive a user's selection operation on any of the setting items in the interface 201. The smart television may display the interface 203 shown in (b) of fig. 2 in response to the user's selection operation on the setting item 202 of the smart assistant. The interface 203 includes a smart voice setting item 204. The smart television may receive a user's selection operation on the smart voice setting item 204, and may display the interface 205 shown in (c) of fig. 2 in response to that operation. The interface 205 includes the setting items required to implement the "smart voice" function, such as one or more of voice wake-up, continuous dialogue, dialect recognition, and voice feature recognition. The smart television may receive the user's on or off operation on each setting item required to start the "smart voice" function in the interface 205, thereby completing the settings for the "smart voice" function. After the settings are completed, the "smart voice" function can be realized; for example, the voice assistant of the smart television is woken up, and the user's voice instructions are recognized by the voice assistant.
In some embodiments, the basic services in the "intelligent voice" function, such as a service that wakes up a voice assistant and recognizes voice instructions entered by the user, are implemented only after the voice wake-up settings are turned on. In this embodiment, the setting items required to be set for turning on the "smart voice" function may include only the voice wake-up setting item.
In other embodiments, the upgraded services in the "smart voice" function, such as a dialect recognition service, a man-machine continuous dialogue service, and a user group recognition service, require the corresponding setting items of those services to be set in addition to turning on the voice wake-up setting item. For example, for the dialect recognition service, the type of dialect to be recognized, such as Sichuan dialect or Beijing dialect, needs to be set; that is, the setting items required to turn on the "smart voice" function also include a dialect recognition setting item. As another example, for the man-machine continuous dialogue service, a continuous dialogue setting item needs to be turned on; that is, the setting items required to turn on the "smart voice" function also include a continuous dialogue setting item. As a further example, for the user group recognition service, a voice feature recognition setting item also needs to be turned on; that is, the setting items required to turn on the "smart voice" function also include a voice feature recognition setting item. After the voice feature recognition setting item is turned on, the smart television can recognize the user group, such as the elderly, adults, or children, that inputs a voice instruction.
In the following embodiments, the present application will be described mainly taking the case where the setting items required to turn on the "smart voice" function are the voice wake-up setting item and the voice feature recognition setting item as an example.
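The relationship described above, in which each function (basic or upgraded service) depends on a set of setting items that must be turned on first, can be modeled as a simple lookup table. The following Python sketch is illustrative only; the function and setting-item names are assumptions, not identifiers from the patent.

```python
# Hypothetical sketch: map each function to the setting items that must be
# turned on before the function can be started (all names are illustrative).
REQUIRED_SETTINGS = {
    "smart_voice_basic": ["voice_wakeup"],
    "smart_voice_full": ["voice_wakeup", "voice_feature_recognition"],
    "dialect_recognition": ["voice_wakeup", "dialect_recognition"],
    "continuous_dialogue": ["voice_wakeup", "continuous_dialogue"],
}

def missing_settings(function_name, current_settings):
    """Return the target setting items that still need to be turned on."""
    required = REQUIRED_SETTINGS.get(function_name, [])
    return [item for item in required if not current_settings.get(item, False)]

def can_start(function_name, current_settings):
    """A function is ready to start only when nothing required is missing."""
    return not missing_settings(function_name, current_settings)
```

Such a table would let the device compute, for any function the user tries to use, exactly which target setting items to surface.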
It should be noted that the settings shown in fig. 1 and 2 are merely examples of the settings for turning on the "smart voice" function in the player skill application; in practice, other functions in the player skill application may also need to be turned on after the corresponding settings are completed in advance. For example, the setting of the smart screen control setting item under system settings - network and link needs to be completed in advance to start the function of controlling the smart screen from a mobile phone. For another example, the WLAN setting item under system settings - network and link needs to be set in advance to enable the cross-device album browsing function.
As can be seen, in the above scenario where a specific function can only be started after the corresponding settings are completed in advance, the user needs to actively enter the corresponding interface, for example, the system settings interface or the application settings interface, and search step by step for the setting items to be set before the settings can be completed. Taking the scenario shown in fig. 1 and fig. 2 as an example, the user needs to actively enter the system settings interface and find, step by step, setting items such as voice wake-up and voice feature recognition before completing the settings. In practice, however, on the one hand, the user may not know which setting items need to be set to realize a specific function, or may not know the paths along which those setting items are located. On the other hand, even if the user clearly knows the setting items and paths, the user still needs to search step by step, and may forget the setting items and paths in the process. In both cases, completing the settings is difficult and the operation is cumbersome, which affects the efficiency of human-machine interaction.
In view of the above, the embodiment of the present application provides a method for setting a starting function, which can be applied to electronic devices such as smart televisions, mobile phones, and personal computers. These electronic devices may provide specific functions that require the corresponding settings to be completed in advance before they can be turned on. The electronic device may display an interface a, where the interface a includes the operation steps for starting a specific function, and the operation steps include operation instruction information of a target setting item. The target setting item is a setting item for supporting the electronic device in realizing the specific function, and the operation instruction information is used to describe how to operate the target setting item so as to start the specific function in the electronic device, such as the name of the setting item and the path along which the setting item is located. For example, if the specific function is the "smart voice" function, and the "smart voice" function can only be turned on after the voice recognition setting item in the system settings is turned on, the operation instruction information may be "please turn on the voice recognition setting item". The operation instruction information explicitly indicates the setting items to be set and the specific setting manner, such as "on". The electronic device may then receive a specific event that triggers the electronic device to use the specific function. In response to the specific event, the electronic device may simultaneously display the operation instruction information and a control of the target setting item on the display screen of the electronic device, where the control of the target setting item can be used to set the target setting item. The user can then complete the settings simply and quickly with reference to the operation instruction information.
In summary, with the method of the embodiment of the present application, the electronic device simultaneously displays the setting instructions and the setting control: on the one hand, the specific setting manner can be explicitly indicated; on the other hand, a control can be provided directly for setting, without the user searching step by step. Therefore, the electronic device can simplify the operation steps of completing the corresponding settings before starting the specific function and reduce the difficulty of setting, thereby improving the efficiency of human-machine interaction.
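The flow summarized above (display interface a with instructions, receive the trigger event, then show the instructions together with controls for the target setting items) can be sketched as a small state model. This is an illustrative Python sketch under assumed names, not an implementation from the patent; the "overlay" stands in for the floating window, popup, message card, or split-screen region.

```python
# Minimal sketch of the described flow: on the trigger event, show the
# operation instruction information together with a control for each target
# setting item, instead of sending the user into the settings hierarchy.
class Device:
    def __init__(self, instruction_text, target_items):
        self.instruction_text = instruction_text   # operation instruction info
        self.target_items = list(target_items)     # e.g. ["voice_wakeup", ...]
        self.settings = {item: False for item in self.target_items}
        self.overlay = None                        # floating-window contents

    def on_trigger_event(self):
        """Event 1: the user tries to use function 1 (e.g. selects option 1)."""
        # Instructions and setting controls are displayed at the same time.
        self.overlay = {
            "instructions": self.instruction_text,
            "controls": list(self.target_items),
        }

    def toggle(self, item):
        """The user operates the control of a target setting item."""
        self.settings[item] = not self.settings[item]

    def function_enabled(self):
        """Function 1 is usable once every target setting item is turned on."""
        return all(self.settings.values())
```

The point of the design is visible in the model: `on_trigger_event` puts both the instruction text and the actionable controls into one surface, so completing the settings never requires leaving the current interface.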
The implementation of the embodiments of the present application will be described in detail below with reference to the accompanying drawings. In the embodiments of the present application, the solution of the present application is mainly described by taking the electronic device being a smart television (such as a smart screen) as an example.
Please refer to fig. 3, which is a schematic structural diagram of an intelligent tv 300 according to an embodiment of the present application. As shown in fig. 3, the smart tv 300 may include: processor 310, external memory interface 320, internal memory 321, universal serial bus (universal serial bus, USB) interface 330, power management module 340, antenna, wireless communication module 360, audio module 370, speaker 370A, microphone 370C, speaker interface 370B, sensor module 380, keys 390, indicators 391, camera 393, and display 392, among others.
The sensor module 380 may include a distance sensor, a proximity sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the smart tv 300. In other embodiments, the smart television 300 may include more or fewer components than shown, or may combine certain components, or split certain components, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 310 may include one or more processing units, such as: the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a memory, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller may be a neural hub and command center of the intelligent television 300. The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
In some embodiments, processor 310 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, and/or a USB interface, among others.
The power management module 340 is used for connecting to a power source. The power management module 340 may also be coupled to the processor 310, the internal memory 321, the display screen 392, the camera 393, the wireless communication module 360, and the like. The power management module 340 receives input from the power source and provides power to the processor 310, the internal memory 321, the display screen 392, the camera 393, the wireless communication module 360, and the like. In some embodiments, the power management module 340 may also be disposed in the processor 310.
The wireless communication function of the smart tv 300 may be implemented through an antenna and wireless communication module 360, etc. The wireless communication module 360 may provide a solution for wireless communication including wireless local area network (wireless local area networks, WLAN) (such as Wi-Fi (wireless fidelity) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), and the like, which are applied to the smart tv 300.
The smart tv 300 implements display functions through a GPU, a display screen 392, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 392 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The smart tv 300 may implement a photographing function through an ISP, a camera 393, a video codec, a GPU, a display 392, an application processor, and the like. The ISP is used to process the data fed back by camera 393. In some embodiments, the ISP may be provided in the camera 393.
Alternatively, the smart tv 300 may not include a camera, i.e., the camera 393 is not disposed in the smart tv 300. The smart television 300 may be externally connected to the camera 393 through an interface (e.g., the USB interface 330). The external camera 393 may be fixed to the smart tv 300 by an external fixing member (e.g., a camera bracket with a clip). For example, the external camera 393 may be fixed at the edge of the display screen 392 of the smart tv 300, such as at the upper edge, by an external fixture.
The digital signal processor is used for processing digital signals, and can process other digital signals in addition to digital image signals. For example, when the smart television 300 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy, etc. Video codecs are used to compress or decompress digital video. The smart television 300 may support one or more video codecs, so that it can play or record video in a plurality of encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent cognition of the intelligent television 300 can be realized through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capability of the smart television 300. The external memory card communicates with the processor 310 via the external memory interface 320 to implement data storage functions. For example, files such as music and video are stored in the external memory card.
The internal memory 321 may be used to store computer-executable program code, which includes instructions. The processor 310 executes various functional applications and data processing of the smart television 300 by executing the instructions stored in the internal memory 321. For example, in an embodiment of the present application, the processor 310 may execute instructions stored in the internal memory 321; the internal memory 321 may include a program storage area and a data storage area.
The smart television 300 may implement audio functions through an audio module 370, a speaker 370A, a microphone 370C, a speaker box interface 370B, an application processor, and the like. Such as music playing, recording, etc.
The audio module 370 is used to convert digital audio information into an analog audio signal output and also to convert an analog audio input into a digital audio signal. The audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be disposed in the processor 310, or some of the functional modules of the audio module 370 may be disposed in the processor 310. Speaker 370A, also known as a "horn," is used to convert audio electrical signals into sound signals. Microphone 370C, also referred to as a "microphone," is used to convert sound signals into electrical signals.
The speaker interface 370B is used to connect with a wired speaker. The speaker interface 370B may be the USB interface 330, a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
The keys 390 include a power on key, a volume key, etc. Key 390 may be a mechanical key. Or may be a touch key. The smart tv 300 may receive key inputs, generating key signal inputs related to user settings and function controls of the smart tv 300.
The indicator 391 may be an indicator light, which may be used to indicate that the smart television 300 is in a power-on state, a standby state, or a power-off state. For example, the indicator light being off may indicate that the smart television 300 is in a power-off state; the indicator light being green or blue may indicate that the smart television 300 is in a power-on state; and the indicator light being red may indicate that the smart television 300 is in a standby state.
In the embodiment of the present application, the smart tv 300 may be controlled by a remote controller adapted to the smart tv 300, or by one or more of voice, gesture, and touch operation on a screen.
Fig. 4 is a schematic panel diagram of a remote controller 400 according to an embodiment of the present application. As shown in fig. 4, the remote controller 400 may include a plurality of keys, such as one or more of a return-to-home-interface key 410, a power key 420, an up key 431, a right key 432, a down key 433, a left key 434, an OK key 440, a return key 450, a voice input key 460, a menu key 470, and a volume key 480. The keys on the remote controller 400 may be mechanical keys or touch keys. The remote controller 400 may receive key inputs, generate key signal inputs related to user settings and function controls of the smart television 300, and transmit corresponding control signals to the smart television 300 to control the smart television 300. For example, the remote controller may transmit a control signal to the smart television 300 through an infrared signal or the like.
In the embodiment of the present application, the up key 431, the right key 432, the down key 433 and the left key 434 in the remote controller 400 may be used to control the movement of an active cursor on the display 392 of the smart tv 300. For example, the remote controller 400 may receive a user's key input to the down key 433, generate a key signal input to move downward, and send a corresponding control signal to the smart tv 300 to control the active cursor on the display 392 of the smart tv 300 to move downward. It will be appreciated that the active cursor movement may be controlled by the combined use of up key 431, right key 432, down key 433 and left key 434.
In this embodiment of the present application, the OK key 440 in the remote controller 400 is used in combination with the above up key 431, right key 432, down key 433, and left key 434 to control the smart television 300 to open the content pointed at by the active cursor; this process is the selection process. Illustratively, the smart television 300 is controlled, through one or more of the up key 431, the right key 432, the down key 433, and the left key 434, to move the active cursor to the application icon of the player skill application; then, through the OK key 440, the smart television 300 is controlled to start the player skill application.
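The direction-key navigation just described can be sketched as moving an active cursor over a grid of interface elements, with the OK key selecting whatever the cursor points at. The sketch below is illustrative only; the grid contents and key names are assumptions, and a real remote would deliver key codes rather than strings.

```python
# Hypothetical sketch: direction-key signals move the active cursor over a
# rows x cols grid of interface elements, clamped at the edges; the OK key
# then "selects" (opens) the element under the cursor.
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def move_cursor(pos, key, rows, cols):
    """Apply one direction-key press; the cursor never leaves the grid."""
    dr, dc = MOVES[key]
    r = min(max(pos[0] + dr, 0), rows - 1)
    c = min(max(pos[1] + dc, 0), cols - 1)
    return (r, c)

def select(grid, pos):
    """OK key: return the interface element the active cursor points at."""
    return grid[pos[0]][pos[1]]
```

Combined use of the four direction keys, as the text notes, is just repeated application of `move_cursor` before the final `select`.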
In addition, the remote controller 400 may further include a battery receiving chamber (not shown) for mounting a battery to supply power to the remote controller 400.
The methods in the following embodiments may be implemented in the smart tv 300 having the above-described hardware structure, and the methods in the embodiments of the present application will be described below by taking the smart tv 300 as an example.
The embodiment of the present application provides a method for setting a starting function, which can simultaneously display the operation steps of starting a specific function (such as a function 1) and the target setting item, so as to improve the human-machine interaction efficiency of the process of completing the corresponding settings before starting the specific function. The manners of realizing simultaneous display include a floating window, a popup window, a message card, a split screen, and the like. The solution of the present application will be mainly described in two forms: the floating window and the split screen.
Mode 1: the floating window.
Fig. 5 is a flowchart of a method for setting an opening function according to an embodiment of the present application. As shown in fig. 5, in this embodiment, the method for setting the opening function may include:
s501, the smart tv 300 displays an interface a, where the interface a includes an operation step of turning on the function 1 (i.e., a specific function), and the operation step includes operation instruction information of the target setting item. Wherein the target setting item is a setting item required to be set for starting the function 1. The operation instruction information is used to describe the manner in which the target setting item is operated to turn on the function 1.
In this embodiment of the present application, for convenience of explanation, the interface a may also be referred to as a first interface, the function 1 may be referred to as a first function, and the target setting item may be understood as a setting item for supporting the smart television 300 to implement the function 1.
Wherein, the function 1 is a function which needs to complete corresponding setting in advance to be started. For example, the "smart voice" function may be set by starting the voice wake-up and voice feature recognition in advance, and then the function 1 may be the "smart voice" function.
In this embodiment of the present application, the smart television 300 may display the operation steps of turning on the function 1 in the interface a. For example, the interface a may be the interface 601 shown in (a) of fig. 6A, where the interface 601 includes the following operation steps: '01 Say "hello YOYO" to the screen to wake up smart voice; 02 ……'. One of the differences from the interface 105 shown in (c) of fig. 1 is that the operation steps of the interface 601 include "03 Set the voice wake-up setting item and the voice feature recognition setting item" (denoted as information 1), while the interface 105 does not. The information 1 may indicate that the voice wake-up setting item and the voice feature recognition setting item need to be set; that is, the information 1 is operation instruction information. It should be noted that in the above example, the content of step "03" is shown at the end of the operation steps in order to highlight the difference from the conventional art. However, the actual implementation is not limited thereto. For example, in the example of the "smart voice" function, the content of step "03" may be displayed first, followed by the contents of steps "01" and "02" in sequence. That is, the specific content of the operation steps may be as follows: '01 Set the voice wake-up setting item and the voice feature recognition setting item; 02 Say "hello YOYO" to the screen to wake up smart voice; 03 Press the voice key of the remote controller ……'. In this way, the display order can be made to coincide with the order in which the operations are performed.
In some embodiments, the operation description information may include a setting item name. For example, the operation instruction information may include the setting item names "voice wake setting item" and "voice feature recognition setting item" in the above information 1. In this way, the smart tv 300 may explicitly indicate setting items that need to be set.
In other embodiments, the operation instruction information may also include the path of the setting item. For example, the operation instruction information may be "03 Go to settings - smart assistant - smart voice in sequence, and turn on the voice wake-up setting item and the voice feature recognition setting item, e.g., set the switch to the ON state" in the interface 602 shown in (b) of fig. 6A (denoted as information 2). The information 2 not only includes the setting item names "voice wake-up setting item" and "voice feature recognition setting item", but also includes the path along which the voice wake-up setting item and the voice feature recognition setting item are located, namely: settings - smart assistant - smart voice. Thus, the smart television 300 may also explicitly indicate the path along which the target setting item is located.
In other embodiments, the operation instruction information may also include the setting parameters of the target setting item. Still taking the above information 2 as an example, the information 2 includes "set the switch to the ON state". The ON state is the setting parameter; that is, the voice wake-up setting item and the voice feature recognition setting item need to be turned on. Thus, the smart television 300 may also explicitly indicate the specific setting parameters.
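The three ingredients of the operation instruction information discussed above (setting item names, the path along which the items are located, and the setting parameter) could be assembled from setting-item metadata. The following Python sketch is an assumption for illustration; the wording and the helper name are not from the patent.

```python
# Illustrative sketch: build operation instruction information (like the
# "information 2" example) from a target setting item's metadata: its names,
# the path it is located on, and the setting parameter (e.g. the ON state).
def build_instruction(item_names, path, parameter="ON"):
    names = " and ".join(item_names)
    return (f"Go to {' - '.join(path)} in sequence, then set {names} "
            f"to the {parameter} state.")
```

Generating the text from the same metadata used to render the setting controls would keep the displayed path and parameter consistent with the controls shown alongside it.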
It should be noted that, in the embodiment of the present application, the target setting item may be a setting item in the system settings (which may also be referred to as a system setting item), such as the voice wake-up setting item in the system settings; alternatively, the target setting item may be a setting item in application settings (which may also be referred to as an application setting item), such as a privacy setting item or an application security setting item in a third-party application. If the smart television 300 needs to acquire the related setting items of a third-party application, authorization from the third-party application is required, so the feasibility and accuracy of the acquisition are difficult to guarantee. Based on this, in some embodiments, the interface a is displayed only for functions that can be started after the corresponding system settings are completed in advance, that is, only for functions whose target setting items are system setting items, and the operation instruction information is displayed in the interface a to explicitly indicate the system setting items to be set. In this way, the accuracy of the provided instruction information can be improved. In the following embodiments, the present application will also be described mainly taking the target setting item being a system setting item as an example.
S502, the intelligent television 300 receives an event 1, wherein the event 1 is used for triggering the intelligent television 300 to use the function 1.
In the embodiment of the present application, for convenience of explanation, the event 1 may be referred to as a first event.
In some embodiments, option 1 is included in interface a, where option 1 is used to trigger the smart television 300 to use function 1. Event 1 may be the receipt, by the smart tv 300, of a signal from the remote controller 400 selecting option 1. By way of example, interface a may be the interface 601 shown in (a) of fig. 6A, where the interface 601 includes an "immediate experience" button 603 used to trigger the smart television 300 to use the "smart voice" function. That is, the "immediate experience" button 603 is option 1. As shown in (a) of fig. 6B, event 1 may be a signal, received by the smart tv 300 from the remote controller 400, that selects the "immediate experience" button: the user operates the remote controller 400 to move the active cursor on the smart television 300 to the "immediate experience" button and then presses the confirm key on the remote controller 400. For convenience of explanation, option 1 may also be referred to as the first option.
In other embodiments, event 1 may be an event in which the user inputs a voice 1 or a somatosensory action (such as a gesture) 1, where the voice 1 or the somatosensory action 1 is used to trigger the smart television 300 to use function 1. The somatosensory action 1 may be, for example, a tap action. For convenience of explanation, the voice 1 may be referred to as a preset voice, and the somatosensory action 1 may be referred to as a preset gesture.
In other embodiments, if the display screen of the smart tv 300 supports touch operations, event 1 may be an operation 1 performed by the user on the interface a, where the operation 1 is used to trigger the smart tv 300 to use function 1. The operation 1 may be a tap operation, a long-press operation, a slide operation, or the like. In a specific implementation, operation 1 may be a tap operation or a long-press operation by the user on option 1 in interface a. For example, as shown in (b) of fig. 6B, event 1 may be a tap operation by the user on the "immediate experience" button.
It should be understood that the above list covers only a few typical forms of event 1; in fact, event 1 may be any event that can trigger the smart tv 300 to use function 1, and the possible forms are not enumerated here one by one.
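The several forms of event 1 described above (a remote-control signal, a preset voice or gesture, or a touch operation) can be sketched as a single predicate. The event structure, field names, and detail strings below are illustrative assumptions for this sketch, not part of the application:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A user event received by the TV; field names are illustrative."""
    source: str   # "remote", "voice", "gesture", or "touch"
    detail: str   # key/option name, recognized phrase, gesture or touch type

def triggers_function_1(event: Event) -> bool:
    """Return True if the event is any of the forms of event 1 described
    in S502: selecting option 1 with the remote, a preset voice or
    gesture, or a tap / long press on option 1 of a touch screen."""
    if event.source == "remote" and event.detail == "option_1":
        return True
    if event.source == "voice" and event.detail == "preset_voice_1":
        return True
    if event.source == "gesture" and event.detail == "preset_gesture_1":
        return True
    if event.source == "touch" and event.detail in ("tap_option_1", "long_press_option_1"):
        return True
    return False
```

Any event satisfying this predicate would then drive the same handling in S503, regardless of its input channel.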
S503, the intelligent television 300 responds to the event 1, and a floating window is displayed in the interface a, wherein the floating window comprises a control of the target setting item.
In the embodiment of the present application, for convenience of explanation, the whole including the interface a and the floating window on the interface a may be referred to as a second interface.
Illustratively, taking function 1 as the "smart voice" function, interface a is the interface 601 shown in (a) of fig. 6A. The smart tv 300 may display the interface 701 shown in (a) of fig. 7 in response to event 1. Unlike the interface 601 shown in (a) of fig. 6A, the interface 701 includes not only the operation instruction information of the target setting items required for realizing the "smart voice" function, such as "03 set the voice wake-up setting item and the voice feature recognition setting item", but also a floating window 702. The floating window 702 includes the controls of the target setting items, such as voice wake-up and voice feature recognition, which are used to turn on the "smart voice" function.
It should be appreciated that a control of a target setting item displayed in the floating window typically includes two aspects: first, information indicating the target setting item, such as the setting item name; and second, a widget for the user to operate and set. That is, the control of the target setting item both indicates the target setting item and is operable. For example, the floating window 702 shown in (a) of fig. 7 includes a control of the voice wake-up setting item, where the control includes the name "voice wake-up" and a switch 705; when the switch 705 is turned on, the voice wake-up setting item is turned on, and when the switch 705 is turned off, the voice wake-up setting item is turned off. That is, the switch 705 is the widget for the user to operate and set.
In the smart tv 300, there may be various functions that need to be used after the corresponding settings are completed, and target setting items corresponding to different functions may be different.
Based on this, in some embodiments, the smart tv 300 may include a correspondence table of the above-mentioned multiple functions and call parameters of setting items required to be set for starting each function. The call parameter is used to instruct the smart tv 300 to obtain a setting item of a required setting. Therefore, the intelligent television 300 can conveniently find the calling parameters corresponding to the current function 1, and then call the target setting items required to be set for starting the function 1.
For example, taking the example that the call parameter is a setting item path, the corresponding relationship shown in the following table 1 may be recorded in the smart tv 300.
TABLE 1
Sequence number | Function | Setting item path
1 | Intelligent voice | setup/intelligent assistant/intelligent voice
2 | Mobile phone controls intelligent screen | setup/network and connection/intelligent screen control/link
The 1st piece of data in Table 1 above represents that the setting items required to be set for turning on the "intelligent voice" function are called from the path "setup/intelligent assistant/intelligent voice". The 2nd piece of data in Table 1 above represents that the setting items required to be set for turning on the "mobile phone controls intelligent screen" function are called from the path "setup/network and connection/intelligent screen control/link". If function 1 is the "intelligent voice" function, it can be queried from Table 1 that the setting item path is "setup/intelligent assistant/intelligent voice", and then the smart tv 300 can call the target setting items of the "intelligent voice" function, such as the voice wake-up setting item and the voice feature recognition setting item, from the path "setup/intelligent assistant/intelligent voice".
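The lookup that Table 1 supports can be sketched as a plain dictionary keyed by function name. The dictionary form and the key strings are illustrative assumptions; the path values follow Table 1:

```python
# Correspondence table of functions to setting item paths (after Table 1).
SETTING_ITEM_PATHS = {
    "intelligent voice": "setup/intelligent assistant/intelligent voice",
    "mobile phone controls intelligent screen":
        "setup/network and connection/intelligent screen control/link",
}

def call_parameter_for(function_name: str) -> str:
    """Return the call parameter (here, a setting item path) for a
    function, from which the TV can call the target setting items."""
    return SETTING_ITEM_PATHS[function_name]
```

With this table, finding the call parameter for the current function 1 is a single keyed lookup, as the paragraph above describes.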
It should be understood that the foregoing examples mainly take the call parameter being a setting item path as an example. In practice, the embodiment of the present application is not limited thereto. For example, the call parameter may also be an interface parameter of the setting item.
Further, in other embodiments, the correspondence between the various functions (such as function 1) and the interfaces (such as interface a) that display the operation steps for turning on those functions may be added on the basis of the correspondence table. In this way, when the interface a is displayed, the smart television 300 can conveniently and accurately determine the target setting items to be called in response to event 1.
For example, the interface identifier may be added on the basis of table 1, to obtain the correspondence relationship shown in table 2 below.
TABLE 2
Sequence number | Function | Interface identifier | Setting item path
1 | Intelligent voice | Interface aaa | setup/intelligent assistant/intelligent voice
2 | Mobile phone controls intelligent screen | Interface bbb | setup/network and connection/intelligent screen control/link
Unlike Table 1, the 1st piece of data in Table 2 above may also represent that the interface aaa is the interface displaying the operation steps for turning on the "smart voice" function, and the 2nd piece of data in Table 2 above may also represent that the interface bbb is the interface displaying the operation steps for turning on the "mobile phone controls intelligent screen" function. If the interface identifier of the current interface a is interface aaa, the corresponding setting item path "setup/intelligent assistant/intelligent voice" can be queried from Table 2 above, and then the smart tv 300 may call the target setting items, such as the voice wake-up setting item and the voice feature recognition setting item, from the path "setup/intelligent assistant/intelligent voice".
That is, the smart tv 300 may, in response to event 1, query the call parameter corresponding to function 1 from the correspondence table. For convenience of explanation, the call parameter corresponding to function 1 may also be referred to as the first call parameter. Then, the smart television 300 may call the target setting items according to the queried call parameter and display the controls of the target setting items in the floating window.
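The Table 2 variant of this query keys on the currently displayed interface rather than the function name. The record layout below is an assumption made for illustration; only the identifiers and paths come from Table 2:

```python
# After Table 2: each record maps an interface identifier to the function
# whose opening steps that interface displays, and to the setting item path.
TABLE_2 = {
    "interface aaa": {
        "function": "intelligent voice",
        "path": "setup/intelligent assistant/intelligent voice",
    },
    "interface bbb": {
        "function": "mobile phone controls intelligent screen",
        "path": "setup/network and connection/intelligent screen control/link",
    },
}

def query_first_call_parameter(current_interface_id: str) -> str:
    """In response to event 1, look up the first call parameter for the
    function whose operation steps the current interface displays."""
    return TABLE_2[current_interface_id]["path"]
```

The TV would then call the target setting items from the returned path and render their controls in the floating window.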
In some embodiments, the floating window may include only the controls of the target setting items required to be set for turning on function 1. For example, the setting items required for turning on the "smart voice" function are only the voice wake-up setting item and the voice feature recognition setting item. Accordingly, the smart tv 300 may, in response to event 1, display only the control of the voice wake-up setting item and the control of the voice feature recognition setting item in the floating window. In this way, the controls of the target setting items can be accurately provided to the user and erroneous settings can be avoided, which is beneficial to further improving the efficiency of human-computer interaction.
In other embodiments, the floating window includes not only the controls of the target setting items required to be set for turning on function 1, but also the controls of the other setting items under the same setting menu as the target setting items. For example, the setting items required for the "smart voice" function are only the voice wake-up setting item and the voice feature recognition setting item, while the dialect recognition setting item and the continuous dialogue setting item are under the same "intelligent voice" setting menu as the voice wake-up setting item and the voice feature recognition setting item. Accordingly, the smart tv 300 may, in response to event 1, display the control of the dialect recognition setting item and the control of the continuous dialogue setting item in the floating window in addition to the control of the voice wake-up setting item and the control of the voice feature recognition setting item. By providing the user with the controls of all setting items under the same setting menu, the smart television 300 helps the user fully recognize the setting menu where the target setting items are located, which in turn facilitates the user later actively finding and setting the target setting items from the system settings.
In addition, the floating window may cover only part of the interface a. For example, in the example shown in (a) of fig. 7, the floating window 702 is displayed at the upper right of the interface 701, where the interface 701 is the interface a. It should be understood that, in practice, the position of the floating window is not limited to that shown in (a) of fig. 7. The smart television 300 may flexibly determine the display position of the floating window according to the display position of the operation instruction information in the interface a, so as to avoid the floating window blocking the operation instruction information. For example, in the interface 703 shown in (b) of fig. 7, the floating window 704 is displayed at the upper left of the interface 703, thereby avoiding blocking the operation instruction information at the lower right. The interface 703 is the interface a.
Further, after displaying the floating window, the smart tv 300 may receive a moving operation of the floating window by the user. The moving operation may be a drag operation of the floating window by the user's finger, or a long-press operation of the direction keys (e.g., the up key 431, the right key 432, the down key 433, and the left key 434 in fig. 4) of the remote controller 400 after moving the movable cursor to the floating window, or the like. The smart tv 300 moves the position of the floating window in response to the moving operation. Therefore, the position of the suspension window can be flexibly adjusted, and shielding of operation instruction information is avoided.
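The placement behavior described above, choosing a floating-window position that avoids blocking the operation instruction information as in fig. 7, could be sketched roughly as follows. The corner-candidate scheme and the rectangle representation are assumptions for illustration, not the application's actual algorithm:

```python
def choose_window_corner(screen_w, screen_h, win_w, win_h, info_rect):
    """Pick a corner (x, y) for the floating window that does not overlap
    the rectangle (x, y, w, h) occupied by the operation instruction
    information; fall back to the top-right corner if every corner overlaps."""
    corners = [
        (screen_w - win_w, 0),                 # top right (as in fig. 7 (a))
        (0, 0),                                # top left  (as in fig. 7 (b))
        (0, screen_h - win_h),                 # bottom left
        (screen_w - win_w, screen_h - win_h),  # bottom right
    ]
    ix, iy, iw, ih = info_rect
    for x, y in corners:
        # No overlap if the window lies entirely to one side of the info area.
        if x + win_w <= ix or x >= ix + iw or y + win_h <= iy or y >= iy + ih:
            return (x, y)
    return corners[0]
```

For instruction text at the lower right (fig. 7 (a)) this picks the top-right corner; for text at the upper right it falls through to the top left, matching fig. 7 (b). The user's manual move operation would then override the chosen position.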
In this embodiment of the present application, a floating window is displayed in the interface a. Since the interface a includes the operation instruction information of the target setting items required to be set for realizing function 1, and the floating window includes the controls of the target setting items, the display screen of the smart television 300 can simultaneously present both the setting controls and the setting reference. Accordingly, the user can complete the setting with reference to the operation instruction information in the interface a. In this way, the setting can be completed quickly within the interface a, without jumping back and forth between the interface a and the system setting interface.
In the above description of S503, the smart tv 300 displays a floating window in the interface a in response to event 1. In practice, the user may already have turned the target setting item on or off, either by actively entering the system settings or through the floating window of the embodiment of the present application, so that the state of the target setting item may have changed. The state of the target setting item is either an on state or an off state.
Illustratively, take the example in which function 1 is the "smart voice" function and the target setting items of the "smart voice" function, namely the voice wake-up setting item and the voice feature recognition setting item, are turned on by default. That is, if the user has not turned off the voice wake-up setting item and the voice feature recognition setting item, both are in the on state; if the user has turned them off, both are in the off state. If the target setting items of function 1 are already in the on state, they do not need to be displayed again for the user to set. For example, in the case where the voice wake-up setting item and the voice feature recognition setting item are in the on state, if the smart tv 300 displayed a floating window in the interface a in response to event 1, it would display the interface 801 shown in fig. 8. The interface 801 includes a floating window 802, and both the voice wake-up setting item and the voice feature recognition setting item in the floating window 802 are in the on state. Obviously, displaying the target setting items in a floating window at this time is meaningless, since the user does not need to perform any setting operation. That is, the target setting items do not need to be displayed.
Based on this, in order to avoid displaying the target setting items again in the case where they have already been turned on, the smart tv 300 may record the states of the target setting items of the various functions. For example, the smart tv 300 may record the correspondence shown in Table 3 below.
TABLE 3
[Table 3 is reproduced as an image in the original publication; it records the state of the target setting items of each function.]
In some embodiments, S503 further includes: in response to event 1, if a target setting item of function 1 is in the off state, the smart television 300 displays a floating window in the interface a, where the floating window includes the controls of the target setting items required to be set for realizing function 1, and the state shown by each control is consistent with the actual state of the corresponding setting item. Here, "a target setting item is in the off state" includes: all of the target setting items are in the off state, or some of the target setting items are in the off state; that is, at least one target setting item is in the off state. In this way, the smart tv 300 displays the target setting items for the user to set only when there is a target setting item that has not been turned on, thereby improving the efficiency of human-computer interaction.
Illustratively, take function 1 as the "intelligent voice" function, with the voice wake-up setting item and the voice feature recognition setting item as the target setting items. In response to event 1, if the smart tv 300 detects that at least one of the voice wake-up setting item and the voice feature recognition setting item is in the off state, it displays a floating window containing the target setting items. If it detects that both are in the off state, it may display the floating window 702 shown in (a) of fig. 7, in which both setting items are off. If it detects that the voice wake-up setting item is on and the voice feature recognition setting item is off, it may display the floating window 901 shown in (a) of fig. 9, in which the voice wake-up setting item is on and the voice feature recognition setting item is off. If it detects that the voice wake-up setting item is off and the voice feature recognition setting item is on, it may display the floating window 902 shown in (b) of fig. 9, in which the voice wake-up setting item is off and the voice feature recognition setting item is on.
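The state check described in these embodiments (display the floating window only when at least one target setting item is off; otherwise proceed directly to S507) reduces to a one-line predicate. The "on"/"off" string encoding of states is an assumption for this sketch:

```python
def should_show_floating_window(target_item_states: dict) -> bool:
    """Per S503: display the floating window only if at least one target
    setting item is in the off state. If all items are already on, the
    TV skips the window and performs S507, using function 1 directly."""
    return any(state == "off" for state in target_item_states.values())
```

The same per-item states would also drive which switches in the window render as on versus off, matching the fig. 7 and fig. 9 cases above.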
In other embodiments, the method further includes: in response to event 1, if the target setting items of function 1 are in the on state, the smart tv 300 does not need to display the target setting items again. Here, "the target setting items are in the on state" means that all of the target setting items are in the on state. In this case, the smart tv 300 may perform S507 so that function 1 can be used directly.
S504, the intelligent television 300 receives an event 2, wherein the event 2 is used for triggering the intelligent television 300 to set a target setting item.
In the embodiment of the present application, for convenience of explanation, the event 2 may be referred to as a second event.
The event 2 may be a signal, sent to the smart tv 300 when the user operates the remote controller 400, to turn on the target setting item. Alternatively, event 2 may be a voice 2 or a somatosensory action (such as a gesture) 2 that turns on the target setting item. Alternatively, event 2 may be a touch operation by the user on the switch control of the target setting item. The embodiment of the present application does not specifically limit this.
S505, the intelligent television 300 responds to the event 2 to finish setting the target setting item.
Illustratively, take function 1 as the "intelligent voice" function, with the voice wake-up setting item and the voice feature recognition setting item as the target setting items. After the smart tv 300 turns on the voice wake-up setting item and the voice feature recognition setting item in response to event 2, the smart tv 300 may display the floating window 802 in the interface 801 as shown in fig. 8; that is, both setting items have been turned on.
Through the above steps S501-S505, the setting of the target setting items can be completed simply and quickly, and function 1 can be used subsequently. In some embodiments, after the setting of the target setting items is completed, use of function 1 can be entered quickly through the following procedure of S506-S507.
S506, after the setting of the target setting item is completed, the intelligent television 300 receives an event 1, wherein the event 1 is used for triggering the intelligent television 300 to use the function 1.
For event 1, reference may be made to the description in S502, which is not repeated here.
In some embodiments, after detecting that the user has completed setting the target setting items, the smart tv 300 may display a prompt information 1. The prompt information 1 is used to prompt the way to trigger the smart tv 300 to use function 1, such as prompting the user to select "immediate experience" again. For example, the prompt information 1 is the prompt 1003 in the interface 1001 shown in fig. 10: after the target setting items of the "smart voice" function are turned on, the smart tv 300 may display the interface 1001 shown in fig. 10, where the interface 1001 includes the prompt 1003 indicating that the "smart voice" function can be entered by selecting the "immediate experience" button, that is, option 1. In this way, after the setting of the target setting items is completed, the user can be prompted how to quickly enter and use the smart voice.
Note that, for the case where event 1 is a signal of selecting option 1 sent to the smart tv 300 by the user operating the remote controller 400, the difference from S502 is that after the target setting item is turned on in S505, the active cursor stays on the floating window, for example on a target setting item. Based on this, in the present embodiment, before S506, the smart tv 300 may move the active cursor from the floating window to option 1. Specifically, after turning on the target setting item, the smart tv 300 may receive an event 3 and close the floating window in response to event 3. For convenience of explanation, event 3 may also be referred to as the third event. For example, event 3 may be an event of clicking a close button on the floating window, or an event of inputting a voice command to close the floating window, or an event in which the user operates the return key on the remote controller 400, causing the remote controller 400 to send the smart tv 300 a signal to close the floating window (i.e., to return to the interface a).
Illustratively, after turning on the target setting item, the smart tv 300 may display the interface 1001 shown in fig. 10, where the interface 1001 includes the floating window 1002 and the active cursor is on the floating window 1002, for example on the voice feature recognition setting item. Then, in response to the signal sent when the user operates the return key on the remote controller 400, the smart tv 300 may display the interface 601 shown in (a) of fig. 6A; that is, the smart tv 300 has closed the floating window. After the floating window is closed, the user may operate the direction keys (e.g., the left key 434 and the right key 432 in fig. 4) on the remote controller 400, causing the remote controller 400 to send the smart television 300 a signal to move the active cursor onto option 1, and the smart television 300 may move the active cursor onto option 1 in response to that signal. In this way, the smart television 300 completes the operation of selecting option 1 when there is no floating window in the interface a, which facilitates accurate selection of option 1.
S507, the intelligent television 300 responds to the event 1 and displays an interface b, wherein the interface b is an interface for realizing the function 1.
In the embodiment of the present application, for convenience of explanation, the interface b may be referred to as a third interface.
Illustratively, take function 1 as the "intelligent voice" function, with the voice wake-up setting item and the voice feature recognition setting item as the target setting items. In response to event 1, if it is detected that both the voice wake-up setting item and the voice feature recognition setting item are in the on state, the smart tv 300 may display the interface 1101 shown in (a) of fig. 11. The interface 1101 includes a prompt 1102, e.g., please say "hello YOYO" to me, for prompting the user to wake up the voice assistant YOYO. After the user inputs the voice "hello YOYO", as shown in (b) of fig. 11, the smart tv 300 may display a prompt 1103, e.g., hi, I am listening, in the interface 1101. After that, the smart tv 300 may carry out the corresponding instructions according to the voice messages input by the user, thereby realizing the "intelligent voice" function. That is, the interface 1101 is the interface b.
In summary, with the method of the embodiment of the present application, after displaying the operation instruction information of the target setting items for turning on function 1, the smart tv 300 may, in response to event 1, display the operation instruction information and the controls of the target setting items side by side in the form of a floating window. In this way, the smart television 300 provides the user with the setting reference and the setting controls at the same time, which facilitates setting by comparison and thus improves the efficiency of human-computer interaction.
Mode two: a split-screen form is adopted.
Fig. 12 is a flowchart of a method for setting an opening function according to an embodiment of the present application. As shown in fig. 12, in the present embodiment, the setting method of the opening function may include:
S1201, the smart tv 300 displays an interface a, where the interface a includes the operation steps for turning on function 1, and the operation steps include operation instruction information. The target setting item is a setting item required to be set for turning on function 1, and the operation instruction information is used to describe the manner of operating the target setting item to turn on function 1.
S1202, the smart tv 300 receives an event 1, where the event 1 is used to trigger the smart tv 300 to use the function 1.
Regarding S1201-S1202, the principles and specific implementation are the same as those of S501-S502, and the descriptions of S501-S502 are referred to, and are not repeated here.
S1203, in response to the event 1, the smart tv 300 displays an interface a and an interface c on a screen of the smart tv 300, where the interface c includes a control of the target setting item.
In the embodiment of the present application, for convenience of explanation, the whole consisting of the interface a and the interface c may be referred to as the second interface. The interface c includes the controls of the target setting items; that is, the operation instruction information and the controls of the target setting items are displayed in a split-screen manner.
Illustratively, taking function 1 as the "intelligent voice" function, interface a is the interface 601 shown in (a) of fig. 6A. In response to event 1, the smart tv 300 may display the interface 1301 and the interface 1302 shown in (a) of fig. 13 in a split screen. The interface 1301 includes the operation instruction information, that is, the interface 1301 is the interface a. The interface 1302 includes the controls of the target setting items, such as the controls of the voice wake-up setting item and the voice feature recognition setting item; that is, the interface 1302 is the interface c.
Similar to the embodiment of mode one (i.e., the floating window form), a control of a target setting item displayed in the interface c also includes two aspects: first, information indicating the target setting item, such as the setting item name; and second, a widget for the user to operate and set. That is, the control of the target setting item both indicates the target setting item and is operable.
It should be understood that the relative positions of the interface a and the interface c are not limited to those shown in (a) of fig. 13. In actual implementation, the interface a and the interface c may be displayed in the display screen split top-and-bottom or split left-and-right. When split left and right, the interface a may be on the left with the interface c on the right, or the interface c may be on the left with the interface a on the right; when split top and bottom, the interface a may be on top with the interface c below, or the interface c may be on top with the interface a below.
The interface a and the interface c may each occupy half of the display screen of the smart tv 300. For example, the interface 1301 and the interface 1302 shown in (a) of fig. 13 halve the display screen of the smart tv 300 left and right, where the interface 1301 is the interface a and the interface 1302 is the interface c.
Alternatively, the display size of the interface a and the display size of the interface c may not be equal; in particular, the display size of the interface a may be set larger than the display size of the interface c. In this way, occlusion of the operation instruction information in the interface a can be reduced. For example, the smart tv 300 may display the interface 1305 and the interface 1306 shown in (c) of fig. 13, where the interface 1305 is the interface a and the interface 1306 is the interface c. The display size of the interface 1305 is significantly larger than that of the interface 1306, so that incomplete display of the operation instruction information in the interface 1305 due to an excessively large display area of the interface 1306 can be avoided.
In some embodiments, in response to event 1, the smart tv 300 may employ a screen adaptation technique and adjust the layout of the content in the interface a (particularly the content including the operation instruction information) according to the display size of the interface a. The adaptive adjustment includes processing such as font size adjustment, automatic line wrapping, and line spacing adjustment. The smart television 300 can then display the interface a and the interface c in a split screen, so that the content the interface a showed in full screen can be displayed in a partial area of the screen, avoiding occlusion of the content in the interface a by the interface c. For example, taking the full-screen interface a as the interface 601 shown in (a) of fig. 6A: if the layout of the interface a is not adjusted, the smart tv 300 may, in response to event 1, display the split-screen interface 1301 and interface 1302 shown in (a) of fig. 13, where the interface 1301 is the interface a and the interface 1302 is the interface c; if, during the split-screen display, the text in the interface a is automatically wrapped according to the display size of the interface a, the smart tv 300 may display the split-screen interface 1303 and interface 1304 shown in (b) of fig. 13, where the interface 1303 is the interface a and the interface 1304 is the interface c. After automatic line wrapping, the text in the interface 1303 can be displayed completely without being blocked by the interface 1304.
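The automatic line-wrapping part of this layout adjustment can be illustrated with Python's standard textwrap module. Using a character count in place of real pixel measurement is a simplifying assumption; an actual device would also adjust font size and line spacing:

```python
import textwrap

def relayout_instruction_text(text: str, chars_per_line: int) -> list:
    """Re-wrap operation instruction text to the narrower width of the
    split-screen interface a, so no line is cut off by the interface c."""
    return textwrap.wrap(text, width=chars_per_line)
```

For instance, text laid out for a full-screen width would be re-wrapped to the roughly half-screen width of the interface 1303 before the split-screen display, keeping every line fully visible.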
In some embodiments, after the interface a and the interface c are displayed in a split screen, the smart tv 300 may receive an adjustment operation by the user on the display sizes of the interface a and the interface c. The adjustment operation may be a drag operation by the user on the boundary area between the interface a and the interface c, which mainly applies to display screens supporting finger touch; alternatively, the adjustment operation may be an operation of long-pressing a direction key of the remote controller 400 (such as the up key 431, the right key 432, the down key 433, or the left key 434 in fig. 4), or the like. In response to the adjustment operation, the smart tv 300 may adjust the display sizes of the interface a and the interface c. This makes it convenient for the user to view the operation instruction information and/or perform the setting operation, further improving the efficiency of human-computer interaction.
When the interface a and the interface c are displayed on the screen of the smart tv 300 in the split-screen mode, since the interface a includes the operation instruction information and the interface c includes the target setting item, the display screen of the smart tv 300 includes both the setting content and the setting guidance. Accordingly, the user can complete the setting in the interface c with reference to the operation instruction information in the interface a. The setting can thus be completed quickly in the interface c displayed alongside the interface a, without jumping back and forth between the interface a and the system setting interface.
Note that the foregoing description of S1203 mainly describes the differences from the foregoing S503. For the parts not described in detail, refer to the description of S503, with the floating window displayed on the interface a in S503 replaced by the split-screen display of the interface a and the interface c; this is not repeated here. For example, in response to the event 1, the smart tv 300 displays the interface a and the interface c in the split-screen mode if the target setting item of the function 1 is in the closed state. As another example, the smart tv 300 may call the target setting item through the correspondence shown in table 1.
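The correspondence-based call in the last sentence can be sketched as a simple lookup table. The table contents and names below are hypothetical placeholders modeled on Table 1 of the description, not the actual call parameters used by the smart tv 300:

```python
# Hypothetical correspondence between functions and the call parameters of
# the setting items that support each function (modeled on Table 1).
CORRESPONDENCE = {
    "intelligent voice": ["voice_wakeup_setting", "voice_feature_recognition_setting"],
    "interface operation": ["gesture_navigation_setting"],
}

def lookup_call_parameters(function_name: str) -> list[str]:
    # In response to event 1, find function 1 in the correspondence and
    # return the call parameters used to invoke its target setting items.
    return CORRESPONDENCE.get(function_name, [])
```

This mirrors the flow of claim 2: search the correspondence for the first function, obtain the first call parameter, then call the target setting item with it.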
S1204, the smart tv 300 receives an event 2, where the event 2 is used to trigger the smart tv 300 to set a target setting item.
S1205, the smart tv 300 completes the setting of the target setting item in response to the event 2.
Illustratively, take the case where the function 1 is the "intelligent voice" function and the target setting items are the voice wake-up setting item and the voice feature recognition setting item. After the smart tv 300 turns on the voice wake-up setting item and the voice feature recognition setting item in response to the event 2, the smart tv 300 may display the interface 1401 and the interface 1402 shown in (a) of fig. 14, in which both setting items have been turned on.
Regarding S1204-S1205, the principle and specific implementation are the same as those of S504-S505; refer to the description of S504-S505 above, which is not repeated here.
S1206, after completing the setting of the target setting item, the smart tv 300 receives an event 1, where the event 1 is used to trigger the smart tv 300 to use the function 1.
Regarding S1206, the principle and specific implementation are the same as those of S506, and the description of S506 is referred to above, and will not be repeated here.
Note that, for the case where the event 1 is a signal, sent to the smart tv 300 by the user operating the remote controller 400, for selecting the option 1, the difference from S506 is as follows: after the target setting item is turned on in S1206, the active cursor remains in the interface c, for example on the target setting item. Based on this, in the present embodiment, before S1206, the smart tv 300 may move the active cursor from the interface c onto the interface a, i.e., onto the split-screen pane displaying the operation steps. In a specific implementation, after completing the setting of the target setting item, the user may operate the direction keys of the remote controller 400, such as the up key 431, the right key 432, the down key 433, and the left key 434 in fig. 4, so that the remote controller 400 sends a signal for moving the active cursor to the smart tv 300. In response to the signal, the smart tv 300 may move the active cursor onto the interface a, for example onto the option 1. Illustratively, take the interface 1401 and the interface 1402 shown in (a) of fig. 14 as the display of the smart tv 300 after the target setting item is turned on. As shown in (a) of fig. 14, the active cursor is currently on the interface 1402 (e.g., on the voice wake-up setting item). As shown in (b) of fig. 14, when the user presses the left key 434 of the remote controller 400 in fig. 4, the active cursor is moved to the "experience immediately" button shown in (c) of fig. 14, i.e., the option 1. Thus, the user can quickly select the option 1.
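The cursor movement between the two split-screen panes can be sketched as a small state transition. The pane layout assumed below (interface a on the left, interface c on the right) follows the left-key behavior described for fig. 14; the function name is an illustrative assumption:

```python
def move_active_cursor(current_pane: str, key: str) -> str:
    # Interface a (operation steps, containing option 1) is the left pane;
    # interface c (target setting items) is the right pane. Pressing the
    # left key while the cursor is in interface c moves it to interface a,
    # mirroring the left key 434 behavior described for fig. 14.
    if current_pane == "interface_c" and key == "left":
        return "interface_a"
    if current_pane == "interface_a" and key == "right":
        return "interface_c"
    return current_pane  # other keys move the cursor within the same pane
```

A real remote-control focus system would also track which control inside each pane holds focus; the sketch only models the pane-to-pane hop.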
S1207, the smart tv 300 displays an interface b, which is an interface for realizing the function 1, in response to the event 1.
Regarding S1207, the principle and specific implementation thereof are the same as those of S507, and the description of S507 is referred to before, and will not be repeated here.
In summary, by adopting the method of the embodiment of the present application, after displaying the operation instruction information for setting the target setting item of the function 1, the smart tv 300 may, in response to the event 1, display the operation instruction information and the control of the target setting item side by side in the split-screen mode. In this way, the smart tv 300 provides the setting reference and the setting control to the user at the same time, which facilitates setting by comparison and thus improves the efficiency of man-machine interaction.
In the scenario illustrated above for the player skill application, the player skill application may introduce various functions that the smart tv 300 can provide. For example, the player skill application may introduce the interface operation function and the intelligent voice function under the level-one menu "quick up" in the interface 103 shown in (b) of fig. 1, and may also introduce the functions under other level-one menus, such as the free movie and TV viewing function and the cross-device album browsing function under the level-one menu "video entertainment" in the interface 1501 shown in fig. 15.
Among the functions described in the player skill application, some functions (e.g., the function 1) can be started only after the corresponding system settings are completed in advance. Accordingly, in a menu interface of the player skill application, such as the interface 103 shown in (b) of fig. 1 or the interface 1501 shown in fig. 15, the smart tv 300 may, in response to the user's selection operation on the function 1, introduce the function 1 in the form of video, text, and the like, and display the interface a during or after the introduction, where the interface a includes the operation instruction information to explicitly prompt the setting item to be set.
Other functions (denoted as function 2) can be turned on without completing corresponding system settings in advance. For convenience of explanation, the function 2 may also be referred to as a second function. The function 2 may be a function that can be started without completing corresponding application settings or system settings in advance; for example, the function 2 is the free movie and TV viewing function. Alternatively, the function 2 may be a function that requires the corresponding application settings, but not the system settings, to be completed in advance. By way of example, taking the free movie and TV viewing function shown in fig. 15 as an example, the user only needs to select a video to watch in the free area under each channel (e.g., television series, movies) of the smart tv 300, without completing any setting in advance. Accordingly, in the menu interface of the player skill application, the smart tv 300, in response to the user's selection operation on the function 2, may likewise introduce the function 2 in the form of video, text, etc. However, the interface a is not displayed during or at the end of the introduction; instead, an interface d is displayed, and the interface d includes the operation steps of starting the function 2. For convenience of explanation, the interface d may also be referred to as a fifth interface. The difference from the interface a is that the operation steps in the interface d do not include information about setting items that need to be set to start the function 2. For example, still taking the free movie and TV viewing function shown in fig. 15 as an example, the smart tv 300 may introduce the function in response to the user selecting it, and after the introduction may display the interface 1601 shown in (a) of fig. 16, where the interface 1601 includes an operation step "01 mode one: first page, television show, movie, documentary …"; the operation step does not include information such as a setting item name or path. That is, the interface 1601 is the interface d.
That is, an application interface of the player skill application, such as the interface 103 shown in (b) of fig. 1 or the interface 1501 shown in fig. 15, may provide a plurality of options that correspond one-to-one to a plurality of functions. For convenience of explanation, the application interface may also be referred to as a fourth interface. The smart tv 300 may display the interface a in response to the user's selection operation on the option of the function 1 in the application interface, and may display the interface d in response to the user's selection operation on the option of the function 2. For convenience of explanation, the option of the function 1 may be referred to as a second option, and the option of the function 2 may be referred to as a third option.
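The dispatch from the fourth interface's options to the two guide interfaces can be sketched as follows. The option identifiers are illustrative placeholders, not names from the patent:

```python
def on_option_selected(option: str) -> str:
    # In the application interface (fourth interface), selecting the
    # second option (function 1, which needs system settings) leads to
    # interface a, while the third option (function 2, which does not)
    # leads to interface d.
    if option == "second_option":
        return "interface_a"
    if option == "third_option":
        return "interface_d"
    raise ValueError(f"unknown option: {option}")
```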
Also, for the function 2 described above, in some embodiments, during the display of the interface d, the smart tv 300 cannot receive and respond to the event 1; for example, the option 1 is not included in the interface d, so the event of the user selecting the option 1 cannot be received. In this way, the user can clearly distinguish the function 1 from the function 2.
In other embodiments, during the display of the interface d, the smart tv 300 may receive the event 1. In response to the event 1, the smart tv 300 may display an interface e, which is an interface implementing the function 2. For convenience of explanation, the interface e may be referred to as a sixth interface. Illustratively, the interface d is the interface 1601 shown in (a) of fig. 16, and in the interface 1601, the option 1 is an "immediate experience" button 1602. During the display of the interface d, the smart tv 300 may, in response to the user's selection operation on the "immediate experience" button 1602, display the interface 1603 shown in (b) of fig. 16, in which the user can watch free movies and television series in the free area or by selecting the all-free option. That is, the interface 1603 is the interface e. In this way, the smart tv 300 can quickly jump to the interface implementing the function 2.
Further, if the target setting items of the function 1 have all been turned on, the function 1 may be understood as a function that can be started without completing the corresponding system settings in advance. Based on this, in some embodiments, for the function 1: if at least one of the target setting items of the function 1 has not yet been turned on, the smart tv 300, in response to the user's selection operation on the function 1, introduces the function 1 in the form of video, text, or the like, and displays the interface a during or after the introduction; if the target setting items of the function 1 have all been turned on, the smart tv 300, in response to the user's selection operation on the function 1, introduces the function 1 in the form of video, text, or the like, and displays an interface f during or after the introduction. The interface f includes the operation steps of starting the function 1 but does not include the operation instruction information of the target setting item. Thus, in the case where the target setting items have already been turned on, erroneously guiding the user to turn them on again can be avoided. Subsequently, the smart tv 300 may receive the event 1, such as the user selecting "immediate experience". In response to the event 1, the smart tv 300 may directly display the interface b, which is the interface implementing the function 1.
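The branch described in this paragraph — interface a when any target setting item is still off, interface f when all are already on — can be sketched as follows (names are illustrative assumptions):

```python
def guide_interface_for_function_1(item_states: dict[str, bool]) -> str:
    # item_states maps each target setting item of function 1 to whether
    # it is turned on. If every item is already on, show interface f
    # (which omits the operation instruction information), so the user is
    # not guided to open items that are already open; otherwise show
    # interface a, which contains the instruction information.
    if all(item_states.values()):
        return "interface_f"
    return "interface_a"
```

This matches the closed/open semantics of claim 5: the target setting item counts as closed if at least one item is closed, and as open only if all items are open.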
Other embodiments of the present application provide an electronic device (such as the smart tv 300), which may include a display screen (e.g., a touch screen), a memory, and one or more processors. The display screen, the memory, and the processors are coupled. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the processors execute the computer instructions, the electronic device may perform the functions or steps performed by the smart tv 300 in the method embodiments described above.
Embodiments of the present application also provide a chip system, as shown in fig. 17, the chip system 1700 includes at least one processor 1701 and at least one interface circuit 1702. The processor 1701 and the interface circuit 1702 may be interconnected by wires. For example, the interface circuit 1702 may be used to receive signals from other devices (e.g., a memory of an electronic apparatus). For another example, the interface circuit 1702 may be used to send signals to other devices, such as the processor 1701. The interface circuit 1702 may, for example, read instructions stored in a memory and send the instructions to the processor 1701. The instructions, when executed by the processor 1701, may cause the electronic device to perform the various steps described in the embodiments above. Of course, the chip system may also include other discrete devices, which are not specifically limited in this embodiment of the present application.
Embodiments of the present application also provide a computer storage medium including computer instructions that, when executed on an electronic device (e.g., the smart television 300) described above, cause the electronic device to perform the functions or steps of the method embodiments described above.
Embodiments of the present application also provide a computer program product, which when run on a computer, causes the computer to perform the functions or steps performed by the electronic device in the method embodiments described above.
It will be apparent to those skilled in the art from the foregoing description that, for convenience and brevity, only the division into the functional modules described above is illustrated; in practical applications, the functions may be allocated to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to perform all or part of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another apparatus, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or a plurality of physical units, may be located in one place, or may be distributed in a plurality of different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a readable storage medium. Based on such understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other various media capable of storing program code.
The foregoing is merely a specific embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (14)

1. A method for setting a starting function, applied to an electronic device, the method comprising:
the electronic equipment displays a first interface, wherein the first interface comprises operation instruction information for a target setting item, and the operation instruction information is used for describing a mode of operating the target setting item to start a first function in the electronic equipment;
the electronic equipment responds to the first event and displays a second interface; the second interface comprises the first interface and a floating window displayed on the first interface, and the control of the target setting item is displayed in the floating window; or the second interface is a split screen interface, one interface of the second interface is the first interface, and the other interface of the second interface displays the control of the target setting item;
and the electronic equipment responds to a second event of the control of the target setting item to finish setting the target setting item.
2. The method according to claim 1, wherein the electronic device stores a correspondence relationship between a plurality of functions in the electronic device and call parameters of setting items supporting the electronic device to realize the functions; the plurality of functions includes the first function;
the electronic device displays a second interface in response to the first event, comprising:
the electronic equipment responds to the first event, searches the first function from the corresponding relation, and searches a first calling parameter corresponding to the first function;
and the electronic equipment calls the target setting item according to the first calling parameter, and displays a control of the target setting item in the second interface.
3. The method of claim 1 or 2, wherein the electronic device displaying a second interface in response to the first event comprises:
and if the target setting item is in a closed state, the electronic equipment responds to the first event and displays the second interface.
4. A method according to claim 3, characterized in that the method further comprises:
if the target setting item is in an open state, the electronic equipment responds to the first event and displays a third interface; the third interface is an interface that implements the first function.
5. The method of claim 4, wherein there are a plurality of the target setting items; the target setting item being in a closed state comprises: at least one of the target setting items being in a closed state; and the target setting item being in an open state comprises: all of the target setting items being in an open state.
6. The method of any of claims 1-5, wherein the second interface comprises the first interface and a hover window displayed on the first interface in which a control of the target setting item is displayed;
after the setting of the target setting item is completed, the method further includes:
the electronic equipment responds to a third event and closes the floating window;
the electronic equipment responds to the first event and displays a third interface, wherein the third interface is an interface for realizing the first function.
7. The method of claim 6, wherein the electronic device is a television, and wherein the third event comprises: and the remote controller of the television sends a control command returned to the first interface to the television.
8. The method of any one of claims 1-5, wherein the second interface is a split screen interface, one interface of the second interface is the first interface, and a control of the target setting item is displayed in another interface of the second interface; and, the electronic device is a television;
After the setting of the target setting item is completed, the method further includes:
the television responds to a control command for moving an active cursor sent by a remote controller of the television, and moves the active cursor to the first interface for displaying the operation instruction information in the second interface;
after the television moves the movable cursor into the first interface for displaying the operation instruction information in the second interface, the electronic equipment responds to the first event and displays a third interface; the third interface is an interface that implements the first function.
9. The method of any of claims 1-8, wherein a first option is included in the first interface, the first event comprising an event that selected the first option; alternatively, the first event includes an event of a user input preset gesture; alternatively, the first event includes an event in which a user inputs a preset voice.
10. The method of claim 9, wherein the electronic device is a television, and wherein the event of the user selecting the first option comprises: and the remote controller of the television sends a control command for selecting the first option to the television.
11. The method of any of claims 1-10, wherein prior to the electronic device displaying the first interface, the method further comprises:
the electronic equipment displays a fourth interface, wherein the fourth interface comprises a plurality of options, and the options are in one-to-one correspondence with a plurality of functions of the electronic equipment;
the electronic device displays a first interface, including:
the electronic equipment responds to the selection operation of the second option, and the first interface is displayed; the second option is one of the plurality of options, the second option corresponds to a first function, and the target setting item supporting the electronic device to realize the first function is a system setting item.
12. The method of claim 11, wherein the method further comprises:
the electronic equipment responds to the selection operation of the third option, and a fifth interface is displayed; the third option is one of the plurality of options, the third option corresponds to a second function, and a target setting item supporting the electronic device to realize the second function is an application setting item; the fifth interface does not include information of a target setting item supporting the electronic device to realize the second function;
The electronic equipment responds to the first event and displays a sixth interface, wherein the sixth interface is an interface for realizing the second function.
13. An electronic device comprising a display screen, a memory, and one or more processors; the display screen, the memory and the processor are coupled; the memory is for storing computer program code comprising computer instructions which, when executed by the processor, cause the electronic device to perform the method of any of claims 1-12.
14. A computer readable storage medium having instructions stored therein, which when run on an electronic device, cause the electronic device to perform the method of any of claims 1-12.
CN202111443430.7A 2021-11-30 2021-11-30 Method for setting starting function and electronic equipment Pending CN116208814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111443430.7A CN116208814A (en) 2021-11-30 2021-11-30 Method for setting starting function and electronic equipment


Publications (1)

Publication Number Publication Date
CN116208814A true CN116208814A (en) 2023-06-02

Family

ID=86511640



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination