CN111162980A - Method and device for scene control and mobile phone - Google Patents

Method and device for scene control and mobile phone

Info

Publication number
CN111162980A
CN111162980A (application CN201911405682.3A)
Authority
CN
China
Prior art keywords
scene
action
position information
equipment
condition
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911405682.3A
Other languages
Chinese (zh)
Inventor
古滔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qingdao Haier Technology Co Ltd
Original Assignee
Qingdao Haier Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qingdao Haier Technology Co Ltd filed Critical Qingdao Haier Technology Co Ltd
Priority to CN201911405682.3A priority Critical patent/CN111162980A/en
Publication of CN111162980A publication Critical patent/CN111162980A/en
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00: Data switching networks
    • H04L 12/28: Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803: Home automation networks
    • H04L 12/2807: Exchanging configuration information on appliance services in a home automation network
    • H04L 12/2816: Controlling appliance services of a home automation network by calling their functionalities
    • H04M: TELEPHONIC COMMUNICATION
    • H04M 1/00: Substation equipment, e.g. for use by subscribers
    • H04M 1/72: Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724: User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454: User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Environmental & Geological Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Telephone Function (AREA)

Abstract

The application relates to the technical field of the Internet of Things and discloses a method for scene control, comprising: setting a device response action corresponding to a scene; and, when the scene is triggered, controlling the device corresponding to the scene to execute the device response action. By setting the device response action corresponding to a scene, the method allows devices in the scene to be triggered to execute the corresponding actions according to user requirements, achieving the effect of customizing a scene template, so that the user can use template scenes more flexibly and user experience is improved. The application also discloses an apparatus and a mobile phone for scene control.

Description

Method and device for scene control and mobile phone
Technical Field
The application relates to the technical field of intelligent terminals, for example, to a method and a device for scene control and a mobile phone.
Background
With the gradual adoption of smart homes, new modes of household-appliance control keep emerging, and controlling appliances by setting scenes is increasingly common. A user sets an operating scene for household appliances on an intelligent terminal, triggering appliances such as air purifiers, air conditioners, smart speakers, electric water heaters, and LED lamps to work under preset conditions, so that the various appliances autonomously operate in the set scene.
In the process of implementing the embodiments of the present disclosure, at least the following problem was found in the related art: in existing scene control methods, a user can only select a template scene and cannot add, delete, or modify device response actions according to actual requirements, resulting in a poor user experience.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key/critical elements or to delineate the scope of such embodiments; rather, it serves as a prelude to the more detailed description presented later.
The embodiments of the disclosure provide a method, an apparatus, and a mobile phone for scene control, so as to solve the technical problem that device response actions cannot be set in existing scene control.
In some embodiments, the method for scene control comprises: setting a device response action corresponding to the scene; when the scene is triggered, controlling the device corresponding to the scene to execute the device response action.
In some embodiments, the apparatus for scene control comprises a processor and a memory storing program instructions, the processor being configured to perform the above-described method for scene control when executing the program instructions.
In some embodiments, the mobile phone comprises the above-mentioned apparatus for scene control.
The method, apparatus, and mobile phone for scene control provided by the embodiments of the present disclosure can achieve the following technical effect: by setting the device response action corresponding to a scene, devices in the scene can be triggered to execute the corresponding actions according to user requirements, achieving the effect of customizing a scene template, so that the user can use template scenes more flexibly and user experience is improved.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting; in the drawings, elements having the same reference numeral designations denote like elements:
FIG. 1 is a schematic diagram of a method for scene control provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an apparatus for scene control according to an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms "first," "second," and the like in the description, the claims, and the above-described drawings of the embodiments of the present disclosure are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the present disclosure described herein can be practiced in orders other than those illustrated or described herein. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
The term "plurality" means two or more unless otherwise specified.
In the embodiment of the present disclosure, the character "/" indicates that the preceding and following objects are in an or relationship. For example, A/B represents: a or B.
The term "and/or" is an associative relationship that describes objects, meaning that three relationships may exist. For example, a and/or B, represents: a or B, or A and B.
With reference to fig. 1, an embodiment of the present disclosure provides a method for scene control, including:
S101, setting a device response action corresponding to a scene; and
S102, when the scene is triggered, controlling the device corresponding to the scene to execute the device response action.
By adopting the method for scene control provided by the embodiment of the present disclosure, the device response action corresponding to a scene can be set, and devices in the scene can be triggered to execute the corresponding actions according to user requirements, achieving the effect of customizing a scene template, so that the user can use template scenes more flexibly and user experience is improved.
Optionally, setting the device response action corresponding to the scene includes: modifying, adding, or deleting the device response action corresponding to the scene, and updating the device response action corresponding to the scene in the scene library. In this way, device response actions can be updated more conveniently.
Optionally, when there are two or more device response actions corresponding to the scene, the method further comprises sorting the device response actions; when the scene is triggered, the devices corresponding to the scene are controlled to execute the device response actions in the set order. For example, if the device response actions corresponding to a living-room scene are turning on the air conditioner and turning on the air purifier, the scene can be set so that, when triggered, the air conditioner is turned on first and then the air purifier.
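As a concrete illustration of steps S101/S102 and the action ordering described above, the following is a minimal Python sketch, not taken from the patent; the class and method names are illustrative. Each scene maps to an ordered list of device response actions, and triggering the scene executes them in that order:

```python
class SceneController:
    """Minimal sketch of S101/S102: each scene maps to an ordered list
    of device response actions; triggering runs them in that order."""

    def __init__(self):
        self._scenes = {}  # scene name -> ordered list of (device, action)

    def set_actions(self, scene, actions):
        # S101: set (or replace) the device response actions of a scene.
        self._scenes[scene] = list(actions)

    def trigger(self, scene):
        # S102: execute the scene's actions in the configured order.
        executed = []
        for device, action in self._scenes.get(scene, []):
            executed.append(f"{device}: {action}")  # stand-in for a real device call
        return executed
```

For instance, setting the actions `[("air conditioner", "turn on"), ("air purifier", "turn on")]` guarantees the air conditioner is switched on before the purifier, matching the living-room example above.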
Optionally, the device response action is set on a floating (pop-up) layer.
Optionally, the method for scene control further comprises: setting a trigger condition of the scene; when the trigger condition is met, the scene is triggered.
Optionally, the trigger condition comprises one or more of: a set time, a set environment, entering a geo-fence, and exiting a geo-fence. For example, at 19:00, the air conditioner in the living room of the home scene is triggered to execute its device response action, such as turning on.
Optionally, the geo-fence is obtained by: acquiring location information; and setting the area within a set distance of the location as the geo-fence. Optionally, the set distance is 100 meters. For example, the location information is the location of the home, and the geo-fence is the area within 100 meters of the home. When the user comes within 100 meters of the home, the air conditioner in the living room of the home scene is triggered to turn on; if the air purifier in the living room is also triggered to turn on, the air conditioner is turned on first and then the air purifier.
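One common way to implement "the area within a set distance of a location" is a circular fence checked with the haversine great-circle distance. The sketch below is an assumption about how such a check could be done, not the patent's actual implementation:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(user, center, radius_m=100):
    """True when the user's (lat, lon) is within radius_m of the fence centre."""
    return haversine_m(*user, *center) <= radius_m
```

With the default `radius_m=100`, this reproduces the "within 100 meters of the home" example: entering the circle would trigger the scene's device response actions.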
Optionally, obtaining the location information includes: obtaining the location information by searching for an address; or performing a fuzzy search by keyword and selecting the location information from the matched addresses; or acquiring GPS positioning information and obtaining the location information from it; or obtaining the location information through map interaction. In this way, the location information can be obtained more conveniently and accurately.
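The keyword-based fuzzy search could, for instance, be a case-insensitive substring match over a candidate address list, with matches ranked by where the keyword occurs. This is an illustrative sketch under those assumptions, not the application's actual search; the field names are invented:

```python
def fuzzy_search(keyword, addresses):
    """Return candidate addresses containing the keyword (case-insensitive),
    ranked by how early the match occurs in the address text."""
    kw = keyword.lower()
    hits = [a for a in addresses if kw in a["name"].lower()]
    return sorted(hits, key=lambda a: a["name"].lower().index(kw))

def pick_location(keyword, addresses):
    """Select location information (lat, lon) from the matched addresses."""
    hits = fuzzy_search(keyword, addresses)
    return hits[0]["coords"] if hits else None
```

A real implementation would typically delegate this to a map provider's geocoding service; the point here is only the shape of "fuzzy search, then select location information from the matches".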
In practical application, the flow of manually executing a template includes: entering the template scene detail page in the scene store; entering the template scene editing page after the scene is successfully enabled; long-pressing to select "edit mode" or "delete condition", where the flow of deleting a condition or adding a new condition is consistent with template operation; and, if the manual execution condition is deleted, saving it. Then a new condition, i.e. a new scene trigger condition, is added: tapping "add new condition" brings up a floating layer, through which the condition is added and a trigger condition selected; the user then enters template scene editing and saves, taps "add new action", and adds a new action, i.e. a device response action corresponding to the scene, through the floating layer.
Optionally, successfully enabling the scene includes: after the template of a manual scene is enabled, the "add condition" button for the manual condition is grayed out and cannot be tapped, so no other conditions can be added; the logic that "if several devices meet the condition, one of them can be selected" is retained, but if the user deletes the condition, only one device can be added, according to the custom condition-adding function; and when there are multiple devices, the user taps the action text to adjust parameters.
Optionally, adding a new condition includes classifying manual and automatic scenes, and further includes adding a set time, a set environment, entering a geo-fence, exiting a geo-fence, a device, the external environment, and the like.
Optionally, adding a new action includes: the device response actions corresponding to the scene can be added, deleted, and sorted, and delay, message, and device functions can be added; when a device response action is added, the logic is consistent with scene customization, and device actions, delays, and messages can be bound; the logic that "if several devices match the device response action, more than one can be selected" is retained, but if the user deletes the action, devices can only be added according to the custom device-adding function.
The flow of manually executing a custom scene includes: tapping on the home page to trigger custom scene editing, tapping "add new action" to bring up a floating layer, and adding the device response action corresponding to the scene on the floating layer.
In some embodiments, when the template of a manual scene is enabled, the "add condition" button for the manual condition is hidden so that no other conditions can be added.
Conditions in a manual scene can be deleted; after deletion, an "add new condition" button is displayed. Tapping "add new condition" pops up a scene-condition selection box at the bottom of the page, which classifies manual and automatic scenes and offers additions such as a set time, a set environment, entering a geo-fence, exiting a geo-fence, a device, and the external environment. A scene with blank conditions adds a condition for the first time.
In an automatic scene, the conditions (trigger conditions) and actions (device response actions corresponding to the scene) directly display add-condition and add-action buttons. Conditions can be combined repeatedly across categories: timing, external environment, device trigger, and geo-fence functions can be combined arbitrarily. If a function conflicts with an existing condition, it is grayed out, and tapping it prompts that it conflicts with the selected function and cannot be selected.
Multiple geo-fence conditions (two or more) can be added, but then the condition logic must be "any condition is met". If the initial logic is "all conditions are met": when a second geo-fence, a timing condition, or a sunrise/sunset condition is added, it is displayed grayed out with a small lock, and tapping the lock gives a unified prompt that, under the all-conditions logic with one geo-fence present, a second geo-fence cannot be added and timing and sunrise/sunset conditions cannot be added. If the initial logic is "any condition is met": a second or further geo-fence can be added, but the scene's logic cannot then be changed; the logic can only be modified after one geo-fence is deleted.
If the user changes an automatic scene's condition to manual, the effective time is deleted and hidden. The effective time is displayed only when the automatic scene has a condition.
At most two sunrise/sunset conditions can be added, i.e. only one sunrise and one sunset per scene; when both exist, the condition logic must be "any condition is met". If the initial logic is "all conditions are met": when a second sunrise/sunset, a timing condition, or a geo-fence is added, it is displayed grayed out with a small lock, and tapping the lock gives a unified prompt that, under the all-conditions logic with a sunrise/sunset present, a second sunrise/sunset cannot be added and timing and geo-fence conditions cannot be added. If the initial logic is "any condition is met": a second sunrise/sunset can be added, but the scene's logic cannot then be changed; the logic can only be modified after one sunrise/sunset is deleted.
A scene whose condition is a geo-fence acts only on the scene's creator. Family members can only see the scene and cannot tap into its configuration; when they tap it or switch it on, a toast message pops up saying the scene can only be edited by its creator and cannot be modified. On long-press editing, another person's personal scene is grayed out, and tapping it prompts that the scene can only be edited by its creator and cannot be deleted or renamed.
For the same external condition, two or more identical "above" or "below" conditions cannot be created. For example, for outdoor humidity: if an "above" condition has been created, then when a second condition is created the "above" option is hidden and only "below" can be selected. If the condition's logic is already configured, e.g. temperature above or below a selected value, the condition displays a grayed small lock, and tapping the lock gives a unified prompt.
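The mutual-exclusion rules above (a second geo-fence or sunrise/sunset condition only under "any" logic, at most one sunrise and one sunset per scene, and no duplicate "above"/"below" thresholds on the same metric) can be sketched as a single validation function. This is inferred from the text for illustration; the condition types and field names are assumptions:

```python
def can_add_condition(existing, new, logic):
    """Return True if `new` may be added to `existing` conditions.

    existing/new: dicts such as {"type": "geofence"},
    {"type": "sunrise_sunset"}, {"type": "timing"}, or
    {"type": "threshold", "metric": "outdoor humidity", "direction": "above"}.
    logic: "all" (every condition must hold) or "any" (any condition suffices).
    """
    fences = sum(c["type"] == "geofence" for c in existing)
    suns = sum(c["type"] == "sunrise_sunset" for c in existing)

    # Under "all" logic, an existing geo-fence locks out further
    # geo-fences, timing, and sunrise/sunset conditions.
    if logic == "all" and fences >= 1 and new["type"] in ("geofence", "timing", "sunrise_sunset"):
        return False
    # Under "all" logic, an existing sunrise/sunset locks out further
    # sunrise/sunset, timing, and geo-fence conditions.
    if logic == "all" and suns >= 1 and new["type"] in ("sunrise_sunset", "timing", "geofence"):
        return False
    # At most one sunrise and one sunset per scene (two in total).
    if new["type"] == "sunrise_sunset" and suns >= 2:
        return False
    # No duplicate "above"/"below" condition on the same external metric.
    if new["type"] == "threshold":
        for c in existing:
            if (c["type"] == "threshold"
                    and c["metric"] == new["metric"]
                    and c["direction"] == new["direction"]):
                return False
    return True
```

In a UI following the text above, a `False` result would correspond to graying the option out with a small lock rather than rejecting it silently.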
Actions hidden after the scene is enabled because no device is configured include: actions whose devices were unconfigured when the user enabled the scene. An action is displayed as to-be-configured when the user has a device that supports it. If a device action is hidden, the delay action before it is also hidden and the delay is not executed, but message actions are not hidden.
If the user deletes an action, it is permanently no longer displayed. For example, take the action of turning on the air conditioner: if the user has no air conditioner to begin with, the action is not displayed after the scene is enabled; if the user later binds an air conditioner, the action is displayed as to-be-configured; but once the user deletes a displayed action, it is no longer displayed, with or without the device. If an action existed in the original template but its device has been shared away, the action is not hidden: the action icon still shows the device's class diagram, but the display changes to "device moved out". Due to the engine design, after the user enters and saves the scene a second time with no usable device remaining, the action is hidden the next time the scene is opened after saving; if the device is shared back, it is not configured automatically, and the user can only be prompted to configure it. If the device's picture cannot be obtained after custom sharing, the picture changes to a missing-device picture, and the label also changes; once the device moves out of the scene, the action is hidden the next time the page is entered. If all devices under some action or condition in a custom scene have been moved out, the user is prompted, upon tapping save, to delete the device-less condition or action, after which the scene is saved.
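The hiding rule, namely dropping a device action whose device the user does not own together with the delay immediately before it while keeping message actions, can be sketched as a simple filter. The dictionary keys are illustrative, not the engine's actual data model:

```python
def visible_actions(actions, owned_devices):
    """Filter the action list shown after a scene is enabled.

    actions: list of dicts like {"kind": "device", "device": "air conditioner"},
    {"kind": "delay", "seconds": 30}, or {"kind": "message", "text": "..."}.
    A device action without an owned device is hidden, and so is the delay
    directly before it; message actions are never hidden.
    """
    out = []
    for a in actions:
        if a["kind"] == "device" and a["device"] not in owned_devices:
            if out and out[-1]["kind"] == "delay":
                out.pop()  # the delay belonged to the hidden device action
            continue  # hide the device action itself
        out.append(a)
    return out
```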
The default ordering of hidden or added scene actions includes: when the scene template is enabled, if devices exist, template actions are arranged in template order; actions newly added by the user are arranged in order of addition time; actions for which the user has a device but that are still to be configured are arranged at the bottom, in template order; once configured, the order is updated the next time the detail page is opened. For example: actions 1 and 2 are initially configured; custom actions A and B are added; actions 3, 4, and 5 are to be configured. After action 4 is configured, re-entering the detail page shows the order 1, 2, A, B, 4, 3, 5.
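The default ordering described above, i.e. initially configured template actions in template order, then user-added custom actions by addition time, then later-configured actions, with still-unconfigured actions at the bottom in template order, can be reproduced with a sort key. The grouping is inferred from the 1, 2, A, B, 4, 3, 5 example; the field names are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    source: str                    # "template" or "custom"
    template_index: int = -1       # position in the original template
    added_order: int = -1          # for custom actions, order of addition
    configured: bool = True
    configured_late: bool = False  # configured after the scene was enabled

def display_order(actions):
    def key(a):
        if a.source == "custom":
            return (1, a.added_order)
        if not a.configured:
            return (3, a.template_index)  # to-be-configured sinks to the bottom
        if a.configured_late:
            return (2, a.template_index)  # configured after enabling
        return (0, a.template_index)      # configured when the template was enabled
    return [a.name for a in sorted(actions, key=key)]
```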
In this way, through free automatic combination of scene conditions, customization of scene templates, and the mutual-exclusion relationships among conditions, the user can use template scenes more flexibly.
As shown in fig. 2, an apparatus for scene control according to an embodiment of the present disclosure includes a processor (processor) 100 and a memory (memory) 101 storing program instructions. Optionally, the apparatus may also include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call program instructions in the memory 101 to perform the method for scene control of the above-described embodiments.
Further, the program instructions in the memory 101 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 101, which is a computer-readable storage medium, may be used for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing, i.e., implements the method for scene control in the above-described embodiments, by executing program instructions/modules stored in the memory 101.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
By adopting the device for controlling the scene, which is provided by the embodiment of the disclosure, the device response action corresponding to the scene can be set, the device under the scene can be triggered to execute the corresponding action according to the user requirement, the effect of customizing the scene template is achieved, the user can use the template scene more flexibly, and the user experience is improved.
The embodiment of the disclosure provides a mobile phone, which comprises the device for scene control. The mobile phone can trigger equipment under a scene to execute corresponding actions according to user requirements by setting equipment response actions corresponding to the scene, so that a scene template customizing effect is achieved, a user can use a template scene more flexibly, and the user experience is improved.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for scene control.
Embodiments of the present disclosure provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for scene control.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, where the computer software product is stored in a storage medium and includes one or more instructions to enable a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present disclosure. The aforementioned storage medium may be a non-transitory storage medium, including a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code; it may also be a transient storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method, or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on its differences from other embodiments, and the same and similar parts of the respective embodiments may be referred to one another. For the methods, products, etc. disclosed in the embodiments, if they correspond to a method section disclosed herein, reference may be made to the description of that method section where relevant.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. A method for scene control, comprising:
setting a device response action corresponding to the scene;
when the scene is triggered, controlling the device corresponding to the scene to execute the device response action.
2. The method of claim 1, wherein setting the device response action corresponding to the scene comprises:
modifying, adding, or deleting the device response action corresponding to the scene, and updating the device response action corresponding to the scene in a scene library.
3. The method of claim 2, wherein, when there are more than two device response actions corresponding to a scene, the method further comprises ordering the device response actions;
and when the scene is triggered, controlling the devices corresponding to the scene to execute the device response actions in the set order.
4. The method of claim 3, wherein the device response action is set on a floating layer.
5. The method of any of claims 1 to 4, further comprising:
setting a trigger condition for the scene; and
triggering the scene when the trigger condition is met.
6. The method of claim 5, wherein the trigger condition comprises one or more of:
a set time, a set environment, entering a geo-fence, or exiting a geo-fence.
7. The method of claim 6, wherein the geo-fence is obtained by:
acquiring position information;
and setting the area within a set length of the position information as the geo-fence.
8. The method of claim 7, wherein the obtaining the location information comprises:
obtaining the position information by searching an address; or
performing a fuzzy search by keyword and selecting the position information from the matched addresses; or
acquiring GPS positioning information and deriving the position information from it; or
obtaining the position information through map interaction information.
9. An apparatus for scene control, comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method for scene control according to any one of claims 1 to 8 when executing the program instructions.
10. A mobile phone, characterized in that it comprises the apparatus for scene control according to claim 9.
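The scene library of claims 1-3 can be pictured as a mapping from scene names to ordered device response actions. The following is a minimal Python sketch of that idea; all class, method, and field names (`SceneLibrary`, `add_action`, `order`, etc.) are illustrative assumptions, not terms from the patent.

```python
# Hypothetical sketch of claims 1-3: a scene library storing device response
# actions per scene, supporting modify/add/delete, and executing the actions
# in a set order when the scene is triggered.

class SceneLibrary:
    def __init__(self):
        # scene name -> list of {"device", "action", "order"} entries
        self._scenes = {}

    def add_action(self, scene, device, action, order=0):
        # Add a device response action to the scene (claim 2: add/update).
        self._scenes.setdefault(scene, []).append(
            {"device": device, "action": action, "order": order}
        )

    def delete_action(self, scene, device):
        # Remove all actions for a given device from the scene (claim 2: delete).
        self._scenes[scene] = [
            a for a in self._scenes.get(scene, []) if a["device"] != device
        ]

    def modify_action(self, scene, device, action):
        # Change the action a device performs in the scene (claim 2: modify).
        for entry in self._scenes.get(scene, []):
            if entry["device"] == device:
                entry["action"] = action

    def trigger(self, scene):
        # Execute the scene's actions in the set order (claim 3).
        executed = []
        for entry in sorted(self._scenes.get(scene, []), key=lambda e: e["order"]):
            executed.append((entry["device"], entry["action"]))
        return executed


lib = SceneLibrary()
lib.add_action("go_home", "air_conditioner", "cool_26C", order=2)
lib.add_action("go_home", "light", "on", order=1)
lib.modify_action("go_home", "air_conditioner", "cool_24C")
print(lib.trigger("go_home"))  # [('light', 'on'), ('air_conditioner', 'cool_24C')]
```

In a real implementation the `trigger` loop would send commands to the appliances (and the library would live server-side or in the phone app); here it only returns the ordered action list to keep the sketch self-contained.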
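Claims 6-7 define a geo-fence as the area within a set length of an acquired position, with scene triggers on entering or exiting it. One standard way to test membership, assumed here for illustration (the patent does not prescribe a distance formula), is the haversine great-circle distance; the coordinates below are invented.

```python
import math

# Sketch of claims 6-7: a circular geo-fence of a set radius around a
# position, with membership tested via the haversine distance.

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points (degrees).
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_fence(center, radius_m, point):
    # True when the point lies within `radius_m` of the fence centre.
    return haversine_m(center[0], center[1], point[0], point[1]) <= radius_m

home = (36.07, 120.38)  # an illustrative position in Qingdao
print(inside_fence(home, 500, (36.071, 120.381)))  # True: roughly 150 m away
print(inside_fence(home, 500, (36.09, 120.38)))    # False: roughly 2.2 km away
```

An "enter geo-fence" trigger would then fire when `inside_fence` flips from `False` to `True` between successive position fixes, and "exit geo-fence" on the opposite transition.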
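Claim 8's keyword path (fuzzy search, then selecting the position from the matched addresses) can be sketched with Python's standard-library `difflib`. The address book and coordinates below are invented for illustration; a real system would query a map service instead.

```python
import difflib

# Sketch of claim 8: fuzzy-match a keyword against known addresses and pick
# the position information from the ranked matches. Data is illustrative.

ADDRESS_BOOK = {
    "Haier Industrial Park, Qingdao": (36.142, 120.473),
    "Qingdao Railway Station": (36.060, 120.371),
    "May Fourth Square, Qingdao": (36.062, 120.384),
}

def fuzzy_search(keyword, cutoff=0.3):
    # Return candidate (address, position) pairs ranked by string similarity.
    matches = difflib.get_close_matches(keyword, ADDRESS_BOOK, n=3, cutoff=cutoff)
    return [(addr, ADDRESS_BOOK[addr]) for addr in matches]

candidates = fuzzy_search("Qingdao Railway")
print(candidates[0])  # ('Qingdao Railway Station', (36.06, 120.371))
```

The user would then select one candidate, and its coordinates become the position information from which the geo-fence is constructed.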
CN201911405682.3A 2019-12-31 2019-12-31 Method and device for scene control and mobile phone Pending CN111162980A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911405682.3A CN111162980A (en) 2019-12-31 2019-12-31 Method and device for scene control and mobile phone

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911405682.3A CN111162980A (en) 2019-12-31 2019-12-31 Method and device for scene control and mobile phone

Publications (1)

Publication Number Publication Date
CN111162980A true CN111162980A (en) 2020-05-15

Family

ID=70559857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911405682.3A Pending CN111162980A (en) 2019-12-31 2019-12-31 Method and device for scene control and mobile phone

Country Status (1)

Country Link
CN (1) CN111162980A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103533501A (en) * 2013-10-15 2014-01-22 厦门雅迅网络股份有限公司 Geofence generating method
US20150082225A1 (en) * 2013-09-18 2015-03-19 Vivint, Inc. Systems and methods for home automation scene control
CN106454898A (en) * 2016-10-20 2017-02-22 北京小米移动软件有限公司 Intelligent scene configuration method and device
CN107852571A (en) * 2015-05-21 2018-03-27 克劳德泰克有限责任公司 Identification, positioning and Verification System and method
CN108282388A (en) * 2017-12-31 2018-07-13 普天智能照明研究院有限公司 For the apparatus and method to home equipment transfer data information
CN108614689A (en) * 2017-01-09 2018-10-02 阿里巴巴集团控股有限公司 Generation method, device and the terminal device of scene service
CN108848022A (en) * 2018-06-05 2018-11-20 华南理工大学 A kind of information push method based on scene and user behavior
CN109033128A (en) * 2018-06-01 2018-12-18 口口相传(北京)网络技术有限公司 A kind of geographic position identification method and device
CN109067915A (en) * 2018-09-20 2018-12-21 北京创鑫旅程网络技术有限公司 The methods, devices and systems of location based service are provided
CN109168131A (en) * 2018-10-09 2019-01-08 好活(昆山)网络科技有限公司 A kind of geography fence creation method used for internet platform and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG Weilan et al.: "Design and Implementation of an LBS-Based Smart Park Guide Interaction System", Information Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114167736A (en) * 2020-10-12 2022-03-11 超级智慧家(上海)物联网科技有限公司 Intelligent household scene generation method and device
CN114167736B (en) * 2020-10-12 2024-04-19 超级智慧家(上海)物联网科技有限公司 Smart home scene generation method and device

Similar Documents

Publication Publication Date Title
CN113412457A (en) Scene pushing method, device and system, electronic equipment and storage medium
CN110687811B (en) Method and device for scene configuration of smart home offline voice equipment
CN109829106B (en) Automatic recommendation method and device, electronic equipment and storage medium
CN105635063B (en) Internet of Things communication protocol configuration method and device
CN105453596A (en) Intelligent SIM selection supporting rich context of input factors
CN109271078A (en) Content share method, terminal device and storage medium
KR20140143725A (en) Image correlation method and electronic device therof
CN102789317A (en) Method and device for accelerating text input
CN109997111A (en) Contents processing across application
CN105824863B (en) Desktop theme recommendation method and terminal
CN113448468A (en) Electronic device and method for processing information executed by electronic device
WO2015184736A1 (en) Method and terminal for transforming background picture of touchscreen device
CN113111186A (en) Method for controlling household appliance, storage medium and electronic device
CN108039989A (en) A kind of method, apparatus of Scene case, storage medium and computer equipment
CN107423635A (en) Application sharing method and device and user terminal
CN103365550A (en) User information setting method and device and client device
CN111158254A (en) Method and device for starting scene and mobile phone
CN109240098A (en) Equipment configuration method, device, terminal device and storage medium
CN111162980A (en) Method and device for scene control and mobile phone
CN110794773A (en) Click-type scene creating method and device
WO2020084614A1 (en) Camera system and method for efficient capture and distribution of images
CN114091422A (en) Display page generation method, device, equipment and medium for exhibition
CN109783144B (en) Method and device for processing variable in interactive realization of virtual environment and storage medium
US20170094500A1 (en) Subscriber identity module card managing method and electronic device
CN104135560A (en) Method and device of adding temporary contact information as well as terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200515