CN111176517A - Method and device for setting scene and mobile phone - Google Patents

Method and device for setting scene and mobile phone

Info

Publication number
CN111176517A
Authority
CN
China
Prior art keywords
scene
user
setting
displaying
information list
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911403191.5A
Other languages
Chinese (zh)
Inventor
古滔 (Gu Tao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN201911403191.5A priority Critical patent/CN111176517A/en
Publication of CN111176517A publication Critical patent/CN111176517A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0482 Interaction with lists of selectable items, e.g. menus
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2807 Exchanging configuration information on appliance services in a home automation network

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application relates to the technical field of the Internet of Things and discloses a method for setting a scene, comprising the following steps: displaying scene options and the location tags corresponding to the scene options; after the user selects a scene and a location tag and submits them, displaying a scene information list; and after the user selects one or more scenes from the scene information list and confirms the setting, adding the one or more scenes to a user-defined scene library to complete the scene setting. The method can display the scene information list according to the scene options and their corresponding location tags, making it easy for the user to select the required scene from the list together with the corresponding location tag and to quickly add one or more scenes to a user-defined scene library. The interface operation is therefore simple, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved. The application also discloses a device for setting a scene, and a mobile phone.

Description

Method and device for setting scene and mobile phone
Technical Field
The application relates to the technical field of the Internet of Things, and in particular to a method and device for setting a scene, and to a mobile phone.
Background
With the gradual adoption of the smart home, new ways of controlling household appliances keep emerging, and controlling appliances by setting scenes has become increasingly common. A user sets up an operating scene for household appliances on an intelligent terminal, so that appliances such as air purifiers, air conditioners, smart speakers, electric water heaters and LED lamps are triggered to work under preset conditions and run autonomously according to the configured scene. When the user changes scenes for different household appliances on a mobile terminal such as a smartphone, the information streams of the scene flow and the tag flow intersect, and the interface has to jump back and forth while the scene is being set.
In the process of implementing the embodiments of the present disclosure, it was found that the related art has at least the following problems: in the existing scene setting method, adding a new tag to a scene causes the interface to jump back and forth, the repeated operations easily confuse the user, the user experience is poor, and a large amount of data traffic is wasted.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key or critical elements or to delineate the scope of such embodiments; rather, it serves as a prelude to the more detailed description presented later.
The embodiments of the present disclosure provide a method and a device for setting a scene, and a mobile phone, so as to solve the technical problem of how to set a scene more conveniently.
In some embodiments, the method for setting a scene includes: displaying scene options and the location tags corresponding to the scene options; after the user selects a scene and a location tag and submits them, displaying a scene information list; and after the user selects one or more scenes from the scene information list and confirms the setting, adding the one or more scenes to a user-defined scene library to complete the scene setting.
In some embodiments, the device for setting a scene includes a processor and a memory storing program instructions, and the processor is configured to perform the above method for setting a scene when executing the program instructions.
In some embodiments, the mobile phone comprises the above device for setting a scene.
The method, device and mobile phone for setting a scene provided by the embodiments of the present disclosure can achieve the following technical effects: the scene information list can be displayed according to the scene options and their corresponding location tags, so that the user can easily select the required scenes from the scene information list together with the corresponding location tags, and one or more scenes can be quickly added to a user-defined scene library; the interface operation is simple, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting, and in which elements having the same reference numeral designations denote like elements, and wherein:
fig. 1 is a schematic diagram of a method for setting a scene according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of another method for setting a scene provided by an embodiment of the present disclosure;
fig. 3 is a schematic diagram of another method for setting a scene according to an embodiment of the present disclosure;
fig. 4 is a schematic diagram of an apparatus for setting a scene according to an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the present disclosure described herein may be made. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
The term "plurality" means two or more unless otherwise specified.
In the embodiments of the present disclosure, the character "/" indicates that the objects before and after it are in an "or" relationship. For example, A/B means: A or B.
The term "and/or" describes an association between objects and indicates that three relationships may exist. For example, A and/or B means: A, or B, or A and B.
With reference to fig. 1, an embodiment of the present disclosure provides a method for setting a scene, including:
S101, displaying scene options and the location tags corresponding to the scene options;
S102, after the user selects a scene and a location tag and submits them, displaying a scene information list;
S103, after the user selects one or more scenes from the scene information list and confirms the setting, adding the one or more scenes to a user-defined scene library to complete the scene setting.
By adopting this method for setting a scene, the scene information list can be displayed according to the scene options and their corresponding location tags, so that the user can easily select the required scene from the scene information list together with the corresponding location tags, and one or more scenes can be quickly added to a user-defined scene library; the interface operation is simple, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved.
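For illustration only, the data handled in steps S101 to S103 can be modeled roughly as follows; this is a minimal Kotlin sketch, and every class, function and example name in it is invented for this illustration rather than taken from the disclosed method.

```kotlin
// Illustrative sketch of the data behind S101-S103; all names are invented.
data class LocationTag(val name: String)                        // e.g. "living room", "bedroom"
data class SceneOption(val name: String,                        // e.g. "go home"
                       val tags: List<LocationTag>)             // tags shown with the option (S101)
data class SceneInfo(val scene: String, val location: String) {
    override fun toString() = "$scene ($location)"              // e.g. "go home (bedroom)"
}

class UserSceneLibrary {
    private val scenes = mutableListOf<SceneInfo>()

    // S102: after the user selects a scene and one or more location tags and submits,
    // build the scene information list that is then displayed.
    fun buildSceneInfoList(scene: SceneOption, selected: List<LocationTag>): List<SceneInfo> =
        selected.map { SceneInfo(scene.name, it.name) }

    // S103: after the user confirms, add the chosen entries to the user-defined scene library.
    fun confirmSetting(chosen: List<SceneInfo>) { scenes += chosen }
}
```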
Optionally, displaying the location tags corresponding to the scene options includes: when the number of location tags reaches a first set condition, that is, exceeds a first set threshold, displaying the location tags in a slidable row. Optionally, the first set threshold may be 4, that is, at most 4 location tags are displayed per screen, and the remaining location tags can be viewed by sliding the row left and right. Optionally, if there are no location tags, the tag row is not displayed.
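A minimal sketch of this display rule, assuming the first set threshold of 4 from the example above; the type and function names are invented for illustration:

```kotlin
// Illustrative tag-row rule: hide when empty, static when few, slidable when many.
const val MAX_TAGS_PER_SCREEN = 4   // assumed first set threshold, as in the example

sealed class TagRow {
    object Hidden : TagRow()                                     // no location tags: row not shown
    data class Static(val tags: List<String>) : TagRow()         // all tags fit on one screen
    data class Scrollable(val tags: List<String>) : TagRow()     // extra tags reached by sliding left/right
}

fun tagRow(tags: List<String>): TagRow = when {
    tags.isEmpty()                   -> TagRow.Hidden
    tags.size <= MAX_TAGS_PER_SCREEN -> TagRow.Static(tags)
    else                             -> TagRow.Scrollable(tags)
}
```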
Optionally, the scene information list is displayed as a floating layer. Specifically, the scene information lists corresponding to the location tags are arranged horizontally and expanded in the floating layer.
Optionally, the scene information list includes the scene selected by the user and its corresponding location information, and the location information corresponds to the location tag.
In some embodiments, with reference to fig. 2, the method for setting a scene includes:
S201, displaying the scene details: the location tags are displayed to the right of the scene option, at most 4 location tags are shown per screen, and the remaining tags can be viewed by sliding left and right; if there are no location tags, the tag row is not displayed. For example, the "go home" scene displays, on its right side, the location information corresponding to location tags such as "living room", "bedroom", "bathroom" and "kitchen".
S202, sliding the location tags left and right: when the number of location tags exceeds 4, the remaining tags can be viewed by sliding left and right.
S203, displaying a floating layer when adding a scene: the location tags support multi-selection, the selected tags are highlighted, and the conditions and actions below them are synchronized accordingly. After the user selects location tags and clicks the "add to my scene" button, the scene information lists corresponding to the location tags are arranged horizontally and expanded in the floating layer, and the lists corresponding to the tags selected by the user are displayed by default. The user can filter which scene information lists to generate: a "number matching" button in the upper right corner of each scene-information-list card lets the user add or remove that list, and at least 1 list is always retained. The cards support sliding left and right, and at most 2 cards are shown on a single screen. A scene information list entry combines the scene with the location information of its location tag; for example, the "go home" scene with the location information "bedroom" yields the entry "go home (bedroom)" (this step is sketched below).
S204, completing the scene setting: it is determined whether the scene was added successfully. If so, the "add to my scene" button changes to "view scene", the location tags at the top become non-clickable, and the state is restored when the user enters the page again. If the addition fails, a failure message is prompted: if enabling fails for all of the scenes, a toast message "scene enabling failed, please retry" is shown; if the problem is no network or a weak network, a unified network-problem prompt is shown.
In this way, the user can more conveniently select the required scene from the scene information list together with the corresponding location tag, and one or more scenes can be quickly added to the user-defined scene library; the interface operation is concise, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved.
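As a hedged illustration of S203 and S204, the card building and the result handling could look roughly like the sketch below; the types, labels and messages are invented and only paraphrase the behavior described above.

```kotlin
// Illustrative sketch of S203/S204: one "scene (room)" card per selected location tag,
// followed by the reaction to the add result. All names and strings are invented.
data class SceneCard(val scene: String, val room: String) {
    val title get() = "$scene ($room)"                   // e.g. "go home (bedroom)"
}

fun buildCards(scene: String, selectedRooms: List<String>): List<SceneCard> =
    selectedRooms.map { SceneCard(scene, it) }           // one card per selected location tag

sealed class AddResult {
    object Success : AddResult()                         // the selected scenes were added
    object AllFailed : AddResult()                       // none of the scenes could be enabled
    object NetworkError : AddResult()                    // no network or weak network
}

data class AddOutcome(val buttonLabel: String, val toast: String?)

fun onAddFinished(result: AddResult): AddOutcome = when (result) {
    AddResult.Success      -> AddOutcome("view scene", null)    // replaces "add to my scene"
    AddResult.AllFailed    -> AddOutcome("add to my scene", "Scene enabling failed, please retry")
    AddResult.NetworkError -> AddOutcome("add to my scene", "Network problem, please retry")
}
```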
Optionally, the method for setting a scene further includes: displaying the scene and the location tags selected by the user.
Optionally, displaying the scene and the location tags selected by the user includes:
when the number of location tags reaches a second set condition, that is, is less than or equal to a second set threshold of 4, directly displaying the scene and the location tags selected by the user;
and when the number of location tags reaches a third set condition, that is, is greater than the second set threshold of 4, displaying the scene and the location tags selected by the user according to a viewing instruction from the user, and hiding the extra location tags while no viewing instruction has been received. Optionally, when the number of location tags is greater than 4, for example 6, only 4 location tags are displayed horizontally in the location-tag function bar and the remaining 2 are hidden, but a "filter" button is displayed at the far right; according to the user's viewing instruction, for example after the user clicks the "filter" button, all location tags can be displayed in a floating layer and selected there.
Optionally, displaying the scene and the location tags selected by the user includes: displaying the corresponding scenes in the floating layer according to the location tags selected by the user.
In some embodiments, with reference to fig. 3, the method for setting a scene includes:
S301, displaying "my scene": a location-tag selector is added under the "my scene" tab. There are three display forms depending on the number of location tags: when the number of location tags is 0, the tag bar is hidden and not displayed; when the number is less than or equal to 4, all location tags are displayed horizontally in the location-tag function bar; when the number is greater than 4, the function bar shows 4 of the tags horizontally by default and a "filter" button is displayed at the far right. The location-tag function bar is not hidden as the page scrolls and always floats at the top. For example, the location information displayed for the tags may be "whole house", "bedroom", "bathroom", "kitchen", and so on.
S302, filtering in the scene floating layer: after the user clicks the "filter" button, all location tags are displayed in the floating layer and can be selected. While the floating layer is expanded, the other areas of the scene information list cannot be clicked and do not respond; the layer is retracted only by clicking "confirm". When no location category is selected, the "clear selection" button is shown as a gray font and cannot be clicked; once one or more categories are selected, it is shown as a black font and can be clicked. Clicking "clear selection" returns the selected categories to the unselected state but does not retract the floating layer.
S303, completing the scene setting: the location tags support multi-selection. After tags are selected, clicking a single tag directly replaces the classified scene information list; clicking "confirm" retracts the floating layer, which completes the filtering and, at the same time, the scene setting (see the sketch after this list).
Optionally, when the number of location tags hidden under the "filter" button is greater than or equal to a third set threshold, for example 2, a numeric badge may be added in the upper right corner of the "filter" button to tell the user how many location tags are hidden under it; for example, with 7 location tags, 4 are shown in the function bar and the badge in the upper right corner of the "filter" button reads 3.
Optionally, if the number of location tags is less than or equal to the second set threshold of 4, all location tags are displayed horizontally and the "filter" button is hidden.
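The tag-bar layout and the badge arithmetic could be sketched as follows; the thresholds of 4 and 2 follow the examples in the text, while the names and the exact badge rule are illustrative assumptions:

```kotlin
// Illustrative layout decision for the "my scene" location-tag bar and its badge.
const val VISIBLE_TAGS = 4      // second set threshold in the example
const val BADGE_THRESHOLD = 2   // assumed third set threshold: badge shown from this many hidden tags

data class TagBar(
    val visible: List<String>,      // tags laid out horizontally in the function bar
    val showFilterButton: Boolean,  // "filter" button on the far right
    val badge: Int?                 // number of hidden tags, or null when no badge is shown
)

fun tagBar(tags: List<String>): TagBar = when {
    tags.isEmpty()            -> TagBar(emptyList(), showFilterButton = false, badge = null) // bar hidden
    tags.size <= VISIBLE_TAGS -> TagBar(tags, showFilterButton = false, badge = null)        // all tags shown
    else -> {
        val hidden = tags.size - VISIBLE_TAGS                    // e.g. 7 tags -> 3 hidden
        TagBar(tags.take(VISIBLE_TAGS),
               showFilterButton = true,
               badge = if (hidden >= BADGE_THRESHOLD) hidden else null)
    }
}
```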
Optionally, if there is only one scene classified under the location tags, or no scene at all, the location-tag function is hidden.
In this way, the user can select scene information more conveniently through the location tags, and one or more scenes can be quickly added to the user-defined scene library; the interface operation is concise, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved.
In practical applications, scene recommendation is performed by filtering the conditions and actions by location. On the scene-recommendation detail page, the location cards of a scene are displayed according to the devices involved in the user's conditions or actions, a location card being the location information displayed on a location tag. If the devices are in 2 or more locations, "whole house" plus the devices' locations are displayed; if only 1 location is involved, only that location is displayed, for example "living room"; if the conditions and actions are in different locations, only "whole house" is displayed; and if the scene does not meet its enabling condition, no location card is displayed. A location card for a space is shown when the conditions or actions in that space are satisfied. For example, if the user has bound an air conditioner in the living room and an air purifier and an air conditioner in the bedroom, 3 location cards are displayed for the "leave home" scene: "whole house", "living room" and "bedroom". Under "whole house" both air conditioners and the purifier can be checked; under "living room" there is only one device, the living-room air conditioner; and under "bedroom" there are 2 devices. A delay action placed before a device action is filtered together with that device, while a message action is repeated under every space. Among the conditions, only device conditions can be filtered; the other conditions are repeated in every space. The location-card filtering also covers conditions or actions that involve no device, and such device-free conditions or actions are handled according to the scene details. A location card is displayed for a location as long as any condition or action of the scene details in that location is satisfied.
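The location-card rules above are somewhat ambiguous in the original text; the sketch below is one simplified reading of them, offered as an assumption rather than as the patented logic:

```kotlin
// Simplified, assumed reading of the location-card rules; names are invented.
data class DeviceRef(val name: String, val room: String)

fun locationCards(conditionDevices: List<DeviceRef>,
                  actionDevices: List<DeviceRef>,
                  meetsEnableCondition: Boolean): List<String> {
    if (!meetsEnableCondition) return emptyList()                // scene cannot be enabled: no cards
    val conditionRooms = conditionDevices.map { it.room }.toSet()
    val actionRooms = actionDevices.map { it.room }.toSet()
    val rooms = conditionRooms + actionRooms
    return when {
        rooms.isEmpty() -> emptyList()
        rooms.size == 1 -> rooms.toList()                        // a single room, e.g. "living room"
        conditionRooms.isNotEmpty() && actionRooms.isNotEmpty()
            && (conditionRooms intersect actionRooms).isEmpty()
                        -> listOf("whole house")                 // conditions and actions in different rooms
        else            -> listOf("whole house") + rooms         // several rooms: "whole house" plus each room
    }
}
```

With the example above (air conditioners in the living room and bedroom plus a purifier in the bedroom as actions, and no device conditions), this sketch yields the three cards "whole house", "living room" and "bedroom".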
In practical applications, scenes are added according to the location cards. First, when the user clicks "add to my scene", a check option appears beside each location card; if the user checks an option, an instantiated scene is generated when the scene is added, and all rooms are checked by default. When no room is checked, the text reads "please select which room to add the scene for" and the "confirm" button is grayed out. When a certain number of rooms are checked, the text reads "add the scenes of N rooms to your 'scene name'", and each generated scene then carries the location tag of its location card. For example, checking "living room" and "bedroom" for the "going-to-bed" scene generates 2 scenes: "going-to-bed" with the [living room] tag and "going-to-bed 1" with the [bedroom] tag.
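A hedged sketch of the per-room instantiation and of the button texts described above; the strings only paraphrase the examples and are not exact UI copy:

```kotlin
// Illustrative per-room scene instantiation with "name", "name 1", ... numbering.
data class UserScene(val name: String, val locationTag: String)

fun instantiate(sceneName: String, checkedRooms: List<String>): List<UserScene> =
    checkedRooms.mapIndexed { i, room ->
        val name = if (i == 0) sceneName else "$sceneName $i"    // "going-to-bed", "going-to-bed 1", ...
        UserScene(name, room)                                    // each scene carries its room's location tag
    }

fun confirmText(sceneName: String, checkedRooms: List<String>): String =
    if (checkedRooms.isEmpty())
        "please select which room to add the scene for"          // "confirm" button is grayed out
    else
        "add the scenes of ${checkedRooms.size} rooms to your \"$sceneName\""

fun addButtonLabel(checkedCount: Int): String =                  // count shown from 2 checked cards upward
    if (checkedCount >= 2) "add to my scene ($checkedCount)" else "add to my scene"
```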
When 2 or more location cards are checked, the "add to my scene" button shows how many cards are checked, i.e. "add to my scene (2/3/4 ...)"; for example, if 4 cards are checked, 4 is displayed. If there is only one location card, the user can click "add to my scene" directly, the cards are not shown in the floating layer, and the scene is added immediately.
When a manual scene is enabled, the text of the success popup changes to a prompt that the scene can be triggered by saying the scene name by voice, for example: "leave home".
After successful enabling, the button still changes to "view scene", and the scene is placed in the list; if several scenes were added, they are placed at the top in order.
If there are too many scene location tags, they can slide left and right; the sliding is not cyclic, and if a location name exceeds the maximum display width, an ellipsis is appended.
If the button has changed to "view scene" after one enabling, the "add to my scene" state is restored when the detail page of the scene is entered again.
When the user clicks "add to my scene", all location cards are checked by default; if the user unchecks a card, that choice is remembered as long as the user does not leave the page, and only after exiting the page and entering it again are all cards checked once more.
As shown in fig. 4, a device for setting a scene according to an embodiment of the present disclosure includes a processor (processor) 100 and a memory (memory) 101 storing program instructions. Optionally, the device may further include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102 and the memory 101 may communicate with one another via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call the program instructions in the memory 101 to perform the method for setting a scene of the above embodiments.
Further, the program instructions in the memory 101 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 101, which is a computer-readable storage medium, may be used for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing, i.e., implements the method for scene setting in the above-described embodiments, by executing program instructions/modules stored in the memory 101.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
By adopting this device for setting a scene, the scene information list can be displayed according to the scene options and their corresponding location tags, so that the user can easily select the required scene from the scene information list together with the corresponding location tag, and one or more scenes can be quickly added to a user-defined scene library; the interface operation is simple, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved.
An embodiment of the present disclosure provides a mobile phone that includes the above device for setting a scene. The mobile phone can display the scene information list according to the scene options and their corresponding location tags, so that the user can easily select the required scene from the scene information list together with the corresponding location tag, and one or more scenes can be quickly added to a user-defined scene library; the interface operation is simple, the user is spared repeated interface switching during scene setting, data traffic is saved, and the user experience is improved.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for scene setting.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for scene setting.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code; it may also be a transient storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same and similar parts of the respective embodiments may be referred to one another. For the methods, products, etc. disclosed in the embodiments, if they correspond to the method sections disclosed in the embodiments, reference may be made to the description of the method sections where relevant.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (9)

1. A method for scene setting, comprising:
displaying a scene option and a location tag corresponding to the scene option;
after a user selects a scene and a location tag and submits them, displaying a scene information list; and
after the user selects one or more scenes from the scene information list and confirms the setting, adding the one or more scenes to a user-defined scene library to complete the scene setting.
2. The method of claim 1, wherein displaying the location tag corresponding to the scene option comprises:
when the number of the location tags reaches a first set condition, displaying the location tags by sliding.
3. The method of claim 2, wherein the scene information list is displayed in a floating layer.
4. The method of claim 3, wherein the scene information list comprises the user-selected scene and its corresponding location information.
5. The method of any of claims 1 to 4, further comprising:
displaying the scene and the location tag selected by the user.
6. The method of claim 5, wherein the presenting the user-selected scene and location tag comprises:
when the number of the location tags reaches a second set condition, directly displaying the scene and the location tags selected by the user;
and when the number of the location tags reaches a third set condition, displaying the scene and the location tags selected by the user according to a viewing instruction of the user, and hiding the location tags while no viewing instruction of the user has been received.
7. The method of claim 5, wherein the presenting the user-selected scene and location tag comprises:
displaying the corresponding scene in the floating layer according to the location tag selected by the user.
8. An apparatus for scene setting, comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method for scene setting according to any one of claims 1 to 7 when executing the program instructions.
9. A handset, characterized in that it comprises an apparatus for scene setting according to claim 8.
CN201911403191.5A 2019-12-31 2019-12-31 Method and device for setting scene and mobile phone Pending CN111176517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911403191.5A CN111176517A (en) 2019-12-31 2019-12-31 Method and device for setting scene and mobile phone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911403191.5A CN111176517A (en) 2019-12-31 2019-12-31 Method and device for setting scene and mobile phone

Publications (1)

Publication Number Publication Date
CN111176517A true CN111176517A (en) 2020-05-19

Family

ID=70658483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911403191.5A Pending CN111176517A (en) 2019-12-31 2019-12-31 Method and device for setting scene and mobile phone

Country Status (1)

Country Link
CN (1) CN111176517A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103616876A (en) * 2013-12-02 2014-03-05 从兴技术有限公司 Intelligent home centralized control device and intelligent home contextual model establishing method
CN104778959A (en) * 2015-03-23 2015-07-15 广东欧珀移动通信有限公司 Control method for play equipment and terminal
US9839089B1 (en) * 2016-08-24 2017-12-05 DXY Technology Co., Limited Control method for smart light
CN107479399A (en) * 2017-09-29 2017-12-15 珠海格力电器股份有限公司 A kind of scene setting method and device of intelligent home device

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112054947A (en) * 2020-08-31 2020-12-08 海信(山东)空调有限公司 Method for controlling indoor environment electric appliance, indoor environment electric appliance and remote control terminal
CN112163125A (en) * 2020-09-22 2021-01-01 海尔优家智能科技(北京)有限公司 Device management method and apparatus, storage medium, and electronic device
CN113488041A (en) * 2021-06-28 2021-10-08 青岛海尔科技有限公司 Method, server and information recognizer for scene recognition
CN114332417A (en) * 2021-12-13 2022-04-12 亮风台(上海)信息科技有限公司 Method, device, storage medium and program product for multi-person scene interaction
CN114332417B (en) * 2021-12-13 2023-07-14 亮风台(上海)信息科技有限公司 Method, equipment, storage medium and program product for interaction of multiple scenes
WO2023221995A1 (en) * 2022-05-19 2023-11-23 华为技术有限公司 Intelligent device control method and electronic device

Similar Documents

Publication Publication Date Title
CN111176517A (en) Method and device for setting scene and mobile phone
CN103399703B (en) The control method of the system bar of subscriber equipment and subscriber equipment
CN105005429B (en) A kind of method and terminal of terminal display picture
CN103713847A (en) System bar control method of user equipment and user equipment
CN105786435A (en) Wallpaper picture display method and device
CN108304112B (en) Data processing method and device
CN108983624A (en) A kind of control method and terminal device of smart home device
CN103442299B (en) A kind of display methods for playing record and electronic equipment
CN107168613B (en) Interface interaction method and cluster terminal
CN103019520A (en) Display method for optional item information of terminal application and terminal
CN106775394B (en) Content revealing method, device and electronic equipment, storage medium
CN108762604A (en) A kind of display methods, device and electronic equipment
CN103324383A (en) Method and device for ranking files or icons
CN102117168A (en) Portable electronic book reading device and method thereof for data processing
CN106657653A (en) Information processing method and device
CN110308845B (en) Interaction method and device for application program control interface
CN108259660A (en) A kind of information cuing method and device, terminal, readable storage medium storing program for executing
CN108769160B (en) Service line recommended method, device and storage medium based on service
US11112938B2 (en) Method and apparatus for filtering object by using pressure
CN110275741B (en) Content display method and electronic equipment
CN107179854A (en) A kind of list display method and device
US20170094500A1 (en) Subscriber identity module card managing method and electronic device
CN106775351A (en) A kind of information processing method and device, electronic equipment
CN104182115A (en) Method and device for carrying out page turning setting in reading application and terminal
CN106155462A (en) A kind of interface alternation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination