CN112034726A - Scene-based control method, device, equipment and storage medium - Google Patents

Scene-based control method, device, equipment and storage medium

Info

Publication number
CN112034726A
CN112034726A (application CN202010970095.5A)
Authority
CN
China
Prior art keywords
content
control
information
scene information
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010970095.5A
Other languages
Chinese (zh)
Inventor
王明
朱锟璐
谷怡良
刘依娜
吕嘉佳
马亚伟
孙荣苑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Shanghai Xiaodu Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010970095.5A priority Critical patent/CN112034726A/en
Publication of CN112034726A publication Critical patent/CN112034726A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00Systems controlled by a computer
    • G05B15/02Systems controlled by a computer electric
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00Programme-control systems
    • G05B19/02Programme-control systems electric
    • G05B19/418Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/20Pc systems
    • G05B2219/26Pc applications
    • G05B2219/2642Domotique, domestic, home control, automation, smart house
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]


Abstract

The application discloses a scene-based control method, device, equipment and storage medium, relating to the fields of the Internet of Things, artificial intelligence and smart home. The specific implementation scheme is as follows: acquire associated control information of scene information, where the associated control information includes content to be played and devices to be controlled; play the content and control the devices to perform the required actions. According to embodiments of the application, device control and content playing can be linked and executed in parallel rather than serially, so execution is fast, waiting time is short, rich scene requirements are met, and the user experience is good.

Description

Scene-based control method, device, equipment and storage medium
Technical Field
The application relates to the field of internet, in particular to the fields of internet of things, artificial intelligence and smart home.
Background
Current smart-home products can trigger a plurality of smart household appliances to jointly execute a series of actions, but they lack a design in which audio and video resources and the smart home devices are executed in parallel or alternately. With the development of AI (Artificial Intelligence) technology, the smart speaker has gradually become the user's control assistant for home devices. By combining the control of audio resources, video resources and IoT (Internet of Things) devices with the user's requirements at different times and in different scenes, different execution scenarios can be designed for the user to satisfy the needs of home life in multiple respects.
Disclosure of Invention
The application provides a scene-based control method, a scene-based control device, scene-based control equipment and a storage medium.
According to an aspect of the present application, there is provided a scene-based control method, including:
acquiring associated control information of scene information, wherein the associated control information comprises content to be played and equipment to be controlled;
playing the content and controlling the device to perform the required actions.
According to another aspect of the present application, there is provided a scene-based control apparatus including:
the device comprises an acquisition unit, a processing unit and a control unit, wherein the acquisition unit is used for acquiring associated control information of scene information, and the associated control information comprises content to be played and equipment to be controlled;
a control unit for playing the content and controlling the device to perform the required actions.
According to embodiments of the application, device control and content playing can be linked and executed in parallel rather than serially, so execution is fast, waiting time is short, rich scene requirements are met, and the user experience is good.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
fig. 1 is a flowchart illustrating a scene-based control method according to an embodiment of the present application.
Fig. 2 is a flowchart illustrating a scene-based control method according to another embodiment of the present application.
Fig. 3 is a schematic diagram of a triggering process.
Fig. 4 is a schematic diagram of an application example.
Fig. 5 is a block diagram of a scene-based control device according to an embodiment of the present application.
Fig. 6 is a block diagram of a scene-based control apparatus according to another embodiment of the present application.
Fig. 7 is a block diagram of an electronic device of a scene-based control method according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a flowchart illustrating a scene-based control method according to an embodiment of the present application. The method can comprise the following steps:
S11, acquiring the associated control information of the scene information, where the associated control information includes the content to be played and the devices to be controlled.
S12, playing the content and controlling the devices to perform the required actions.
In the embodiments of the present application, the content may also be referred to as content resources, and includes resources such as information, video and entertainment, for example: the current weather, recent news, the user's favorite music, audio books, movie resources, and the like. A scene in the embodiments of the present application may include a physical environment, a particular time, or a combination of the two.
The content to be played in the associated control information of a given piece of scene information may include one kind of content or a plurality of kinds of content. Different types of content may be played through different applications: for example, news is played through a news application, weather forecasts through a weather application, audio and video content through a multimedia application, and ambient temperature and humidity through an environment detection application. The user can set the associated control information of the scene information according to his or her needs. Part of the associated control information may also be set automatically according to the user's preferences; for example, the content to be played may include news topics the user is interested in, songs the user enjoys, and the weather of a city the user has recently followed.
In the embodiments of the present application, the devices may include various hardware devices. Controlling a device to perform a required action (which may also be referred to as an operation) may include controlling a hardware device, such as a household appliance, to perform a basic operation, for example: turning on the air conditioner, turning off the television, turning on the air purifier, or opening a window.
The devices to be controlled in the associated control information of a given piece of scene information may include one device or a plurality of devices. The control end, for example a smart speaker, mobile phone or other intelligent terminal, may connect to the controlled devices in multiple ways, for example via Bluetooth, infrared, a local area network (for example, for a television screen) or the cloud. For the different devices, the control end can issue control according to the execution sequence and execution actions required by the scene information. For example, when the user speaks the phrase "I am home" and the home-coming scene is recognized, the control end can automatically turn on the lights, air purifier, air conditioner and television at home while playing the day's news for the user. For another example, when the air quality reading PM2.5 is greater than 100, matching the air-purification scene, the control end may automatically close the windows, turn on the air purifier, and play the latest weather conditions.
According to the embodiment of the application, the linkage of equipment control and content playing can be realized, serial execution is not needed, the execution speed is high, the waiting time is short, the requirement for enriching scenes is met, and the user experience is good.
Fig. 2 is a flowchart of a scene-based control method according to another embodiment of the present application. The method of this embodiment may include the steps of the above-described embodiment. In this embodiment, S12 may include: during the playing of the content, sending control instructions in parallel to at least one of the devices. In this way the devices can be controlled to perform the required actions in parallel, which improves processing speed and reduces waiting time.
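The parallel arrangement described here can be illustrated with a minimal Python sketch. This is not code from the patent; all function and device names are hypothetical, and real playback and device I/O are replaced by log entries.

```python
import threading

def play_content(content, log):
    # Stand-in for playing one content resource (e.g. news, music).
    log.append(f"playing {content}")

def send_control_instruction(device, action, log):
    # Stand-in for sending one control instruction to one device.
    log.append(f"{device}:{action}")

def execute_scene(contents, device_actions):
    """Play contents in sequence while control instructions are sent
    to the devices in parallel (one thread per device)."""
    log = []
    workers = [
        threading.Thread(target=send_control_instruction, args=(d, a, log))
        for d, a in device_actions
    ]
    for w in workers:
        w.start()               # device control starts immediately
    for c in contents:          # contents are played one after another
        play_content(c, log)
    for w in workers:
        w.join()
    return log
```

Playback does not wait for the devices, and the devices do not wait for each other, which matches the "no serial execution" claim of this embodiment.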
In one embodiment, playing the content includes:
S21, playing the content through a process corresponding to the content. Illustratively, one thread may trigger the process corresponding to the content to play it, while another thread triggers a corresponding process to control the devices to perform the required actions, so that content playing and device control are linked and run in parallel, improving processing speed and reducing waiting time.
In one embodiment, sending control instructions to at least one of the devices comprises:
S31, sending a control instruction to at least one of the devices through a process corresponding to the Internet of Things (IoT) function.
In the embodiments of the present application, S21 and S31 may be executed in parallel. For example, control instructions are sent to one or more devices while the content is being played. As another example, a control instruction may first be sent to a device instructing it to perform an action after a delay; certain content is then played, after which control instructions are sent to one or more further devices. In this way a parallel control effect is achieved, improving device control efficiency and the user experience.
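The delayed-action variant mentioned above can be sketched with `threading.Timer`, which fires an action after a delay without blocking the thread that continues playback. This is an illustrative sketch only; the names are hypothetical.

```python
import threading

def run_with_delays(actions, out, wait=True):
    """Schedule each (delay_seconds, action_name) pair on a timer.
    Delayed actions do not block the caller, so content playback can
    continue on the main thread in the meantime."""
    timers = [
        threading.Timer(delay, out.append, args=(name,))
        for delay, name in actions
    ]
    for t in timers:
        t.start()
    if wait:
        for t in timers:
            t.join()  # only for deterministic inspection in this sketch
    return out
```

A real control end would of course not join immediately; the join here only makes the sketch's result observable.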
Illustratively, the control end can establish connections with external devices through the Internet of Things function. The connection between the control end and the external devices may take various forms, such as Bluetooth, infrared, a local area network or the cloud. The control end can control a plurality of external devices in parallel: for example, after the control end triggers the process corresponding to the Internet of Things function, that process can trigger a plurality of threads, each of which controls one external device. The devices can thus be controlled to perform the required actions in parallel, without waiting for one device to finish before controlling the next.
In one embodiment, the sending of the control instruction to at least one of the devices through a corresponding process of the internet of things includes at least one of:
triggering a first thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through Bluetooth;
triggering a second thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through infrared;
triggering a third thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through a local area network;
and triggering a fourth thread through a process corresponding to the function of the Internet of things, and sending a control instruction to the equipment connected through the cloud end.
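The four-way dispatch above (one thread per connection type) can be sketched as follows. This is a hedged illustration, not the patent's implementation: the sender functions are stubs that record what would be transmitted, and the mapping keys are assumptions.

```python
import threading

# Hypothetical per-connection-type send routines (stubs).
def send_bluetooth(device, cmd, out): out.append(("bt", device, cmd))
def send_infrared(device, cmd, out): out.append(("ir", device, cmd))
def send_lan(device, cmd, out): out.append(("lan", device, cmd))
def send_cloud(device, cmd, out): out.append(("cloud", device, cmd))

SENDERS = {
    "bluetooth": send_bluetooth,
    "infrared": send_infrared,
    "lan": send_lan,
    "cloud": send_cloud,
}

def iot_dispatch(instructions):
    """For each (connection, device, command) triple, trigger one thread
    that sends the command over the matching connection type."""
    out = []
    threads = []
    for conn, device, cmd in instructions:
        t = threading.Thread(target=SENDERS[conn], args=(device, cmd, out))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return out
```

All four connection types are served concurrently, so a slow cloud round-trip does not delay a local Bluetooth command.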
In the embodiments of the present application, different scene information may trigger different devices to perform different actions. As shown in fig. 3, after certain scene information is matched, a process P0 corresponding to the scene may be triggered. If the associated control information of the scene information includes both content to be played and devices to be controlled, the process P0 may trigger, through one thread, the process P11 corresponding to the content to be played. If there are multiple contents to be played, for example music and news, multiple processes may be triggered in sequence: after process P11 is triggered to play music and playback finishes, process P12 may be triggered to play the news. In addition, process P0 may trigger, through another thread, the process P2 corresponding to the Internet of Things function to control the operation of the external devices. The process corresponding to the Internet of Things function can control a plurality of external devices through a plurality of concurrent threads.
For example, in a good-night scene, five lights need to be turned off and sleep-aid music S played: two of them, L1 and L2, are connected through Bluetooth; two, L3 and L4, through infrared; and one, L5, is a third-party light connected through the cloud. The control end can launch its multimedia playing application, generating process P11, to play the music S; in the meantime, it can also trigger a plurality of threads through process P2, corresponding to the Internet of Things function, to turn the lights off. For example, lights L1 and L2 are turned off by thread 1 and thread 2, respectively, lights L3 and L4 by thread 3 and thread 4, and thread 5 sends the instruction to turn off L5 to the cloud of the control end. The cloud of the control end forwards that instruction to the cloud of L5, which sends it on to L5 to turn it off.
For example, in a home-coming scene, lights L1 and L2 connected via Bluetooth need to be turned on, an air purifier connected via the cloud needs to be turned on, and news needs to be played. The control end can launch its own news application, generating process P12, to play the news; in the meantime, it can also trigger a plurality of threads through process P2, corresponding to the Internet of Things function, to turn on lights L1 and L2 and the air purifier C. For example, lights L1 and L2 are turned on by thread 1 and thread 2, respectively, and thread 3 sends the instruction to turn on purifier C to the cloud of the control end. The cloud of the control end forwards that instruction to the cloud of C, which sends it on to C to turn it on.
Through the process corresponding to the Internet of Things function, multiple devices can be controlled concurrently to perform the required actions, without waiting for one device to finish before controlling the next, which improves response speed.
In one embodiment, the method further comprises:
presetting associated control information for each piece of scene information, where the associated control information includes the content to be played, the devices to be controlled, the execution sequence and the execution actions.
For example, the associated control information of a plurality of kinds of scene information may be set in advance. In practical applications, the user is supported in selecting and/or configuring the associated control information of various scene information at the control end. The associated control information may include not only the content to be played and the devices to be controlled, but also the execution sequence and the execution actions. For example: play content C1 first, then content C2, and control devices D1, D2 and D3 at the same time as C1 is played; or turn on device D1 while playing content C1; or turn on device D1 after a one-minute timer and then play content C2.
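One plausible shape for such an associated-control-information record is sketched below. The field names and values are illustrative assumptions, not the patent's data format; they merely show how contents, devices, connection types and per-device delays could be held together per scene.

```python
# Hypothetical associated control information, keyed by scene identifier.
SCENES = {
    "night": {
        # Contents are played in the listed order.
        "contents": ["relaxing_music", "bedtime_story"],
        # Each device entry carries its connection type, action and delay.
        "devices": [
            {"id": "L1", "conn": "bluetooth", "action": "off", "delay_s": 0},
            {"id": "K1", "conn": "cloud", "action": "on", "delay_s": 60},
        ],
    }
}

def lookup(scene_id):
    """Return (contents, devices) for one scene."""
    info = SCENES[scene_id]
    return info["contents"], info["devices"]
```

A control end could hand the `devices` list to the IoT dispatch and the `contents` list to the playback module.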
For example, for a home-coming scene, turning on the living-room light L1, the master-bedroom light L2, the air conditioner and the air purifier, and playing the current weather, may be preset. For a good-night scene, turning off all the lights at home, closing the curtains, turning on the air purifier, adjusting the air conditioner to a set temperature and turning it off on a timer may be preset, as may playing relaxing music or telling a story.
Presetting the associated control information of each piece of scene information reduces the difficulty of the user's first-time setup and lowers the threshold of use, making the product more convenient.
In one embodiment, the method further comprises:
storing the associated control information of each piece of scene information in the cloud.
For example, the preset associated control information of each piece of scene information may be saved to the cloud of the control end. The cloud may also be referred to as a cloud platform or a cloud service platform. When the control end matches scene information that needs to be triggered, it can acquire the associated control information of that scene information from the cloud. For example, to trigger a good-night scene, the control end may send an identifier of the good-night scene together with its device identifier to the cloud; the cloud looks up the pieces of scene information supported by the device according to the device identifier and finds among them the associated control information of the good-night scene. For example, the retrieved associated control information specifies turning off lights L1, L2, L3, L4 and L5, turning on air conditioner K1, closing curtain W1 and playing song S. The cloud returns this associated control information to the control end, which plays song S and controls lights L1 through L5, air conditioner K1 and curtain W1.
In this embodiment, storing the associated control information of each piece of scene information in the cloud facilitates unified management and integration of various types of devices and suits a wide range of scenes. For example, the cloud helps support integrating the smart scenes of multiple manufacturers and can support more types of devices, creating a unified experience for users.
In one embodiment, acquiring control information associated with scene information includes:
sending the scene information to a cloud;
and receiving the associated control information of the scene information returned by the cloud.
In the embodiments of the present application, if the associated control information of each piece of scene information is stored in the cloud, the control end, after determining the scene information to be triggered, can send the scene information together with the device's identification information to the cloud. The cloud can look up the set of scene information corresponding to the device according to the identification information and then find the associated control information of the scene information the device needs to trigger. This supports a wide variety of scenes, offers good extensibility, and helps satisfy the varied needs of users.
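The cloud-side lookup can be sketched as a two-level map from device identifier to the scenes that device supports. This is an assumed illustration; a real cloud service would sit behind a network API, and all names here are hypothetical.

```python
# Hypothetical cloud-side store: device id -> {scene id -> control info}.
CLOUD_STORE = {
    "speaker-01": {
        "night": {
            "contents": ["song_S"],
            "devices": ["L1", "L2", "K1", "W1"],
        },
    }
}

def fetch_scene_control(device_id, scene_id):
    """Return the associated control information of one scene for one
    device, or None if the device does not support that scene."""
    scenes = CLOUD_STORE.get(device_id, {})
    return scenes.get(scene_id)
```

The control end would send `(device_id, scene_id)` to the cloud and receive back the record to execute locally.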
In one embodiment, the method further comprises at least one of:
recognizing the detected voice signal to obtain matched scene information;
detecting the temperature and/or humidity of the environment to obtain matched scene information;
identifying the human body signal to obtain matched scene information;
identifying the illumination intensity of the environment to obtain matched scene information;
detecting the pollution degree of the environment to obtain matched scene information;
and detecting and recognizing any combination of voice signals, ambient temperature, ambient humidity, human-body signals, ambient light intensity, ambient carbon-dioxide concentration and ambient harmful-gas concentration to obtain matched scene information. Specifically, detection and recognition can be performed according to any combination configured by the user, so as to obtain the matched scene information.
Illustratively, the context information may have a corresponding trigger. Such as voice triggers, temperature triggers, pollution triggers, body-sensitive triggers, light intensity triggers, etc.
For example, in a voice triggering mode, the control end may detect a voice signal through a microphone, and perform processing such as voice recognition and semantic analysis on the detected voice signal to obtain matched scene information. For example, the user utters a voice of "i have come home", and after recognition and analysis, the obtained matching scene information may be a scene of coming home. For another example, the user sends out voices such as "i want to sleep", "night" and the like, and after recognition and analysis, the obtained matched scene information can be a night scene.
For example, in the temperature triggered manner, if it is detected that the ambient temperature is lower than the set threshold, the triggered scene information may be "low temperature". If the detected ambient temperature is higher than the set threshold, the triggered scene information may be "high temperature".
For example, in the pollution level triggering mode, if the pollutant of the environment, such as PM2.5, is detected to be greater than the set threshold, the triggered scene information may be "air purification".
Through various triggering modes, the method is favorable for flexibly meeting the user requirements of various scenes, and provides a more flexible and intelligent control mode.
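The trigger modes above amount to mapping sensed conditions to a scene identifier. A minimal rule-based sketch follows; the thresholds and phrases are taken from the examples in this section, while the function name and condition keys are hypothetical.

```python
def match_scene(conditions):
    """Map sensed conditions to a scene identifier, or None if no
    scene matches. `conditions` is a dict of sensor readings."""
    # Voice trigger: recognized speech mapped to a scene.
    if conditions.get("speech") in ("i have come home", "i am home"):
        return "home"
    # Pollution trigger: PM2.5 above threshold -> air purification.
    if conditions.get("pm25", 0) > 100:
        return "air_purification"
    # Temperature trigger: below a set threshold -> low temperature.
    if conditions.get("temperature", 20) < 5:
        return "low_temperature"
    return None
```

The matched identifier would then be used to fetch the scene's associated control information.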
In an application example, a system for linkage control of hardware devices and content resources in a smart home is provided, as shown in fig. 4, the system mainly includes the following contents:
1. modular, parallel execution
The control end, such as a smart speaker, mobile phone or other intelligent terminal, can separate hardware control from content resources and execute them in parallel through different modules. For example, after the user sets up a combined operation, hardware control is performed by one module, such as an IoT module. Specifically, a process of the IoT module may run multiple threads concurrently to perform multiple hardware operations in parallel: hardware operation 1, hardware operation 2, ..., hardware operation n. Content resources may be played through another module, which triggers the corresponding process according to the type of content resource. If multiple content resources need to be played, such as content resource 1, content resource 2, ..., content resource n, they can be played in sequence. For example, the process corresponding to the weather forecast may be triggered to broadcast the current weather, the process corresponding to news to broadcast the news, and the process corresponding to music to play a song the user is interested in.
Information about the various hardware devices the control end can control, such as smart-home devices, can be stored in the cloud platform of the control end, and the control end can also obtain the specific content to be played through the cloud platform according to the user's requirements.
2. Intelligent construction of content resources
The user can select a smart-scene template. A template can provide relatively fixed content, or content combinations constructed intelligently in real time according to the user's preferences, the date, the time and current conditions. For example: play music the user likes according to the user's preferences; play real-time news of interest to the user; suggest clothing and travel plans according to the day's weather; play historical events related to the current day.
3. Integrated third party scenarios
When a user has set up scene operations on third-party cloud platforms, those operations can be integrated through the scheme of the embodiments of the application, and multiple third-party scenes can be executed in parallel. For example, referring to fig. 4, multiple threads may concurrently execute third-party scene 1, third-party scene 2, ..., third-party scene n, respectively.
For example, if the user's household appliances come from multiple manufacturers, each manufacturer has its own device cloud platform, and a smart scene set on such a platform can control only that manufacturer's appliances. When the user wants to execute a smart scene through an instruction or a trigger, the smart scenes of the respective appliance manufacturers can be integrated through an integration system. In this system, the cloud platform of the control end connects to the cloud platforms of the various appliance manufacturers: the control end sends a control instruction to a manufacturer's cloud platform through its own cloud platform, and the manufacturer's cloud platform sends the instruction on to the connected device.
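The cloud-to-cloud forwarding just described can be sketched as follows. This is an illustrative model only: the classes are hypothetical stand-ins for network services, and delivery is recorded rather than actually transmitted.

```python
class VendorCloud:
    """Stand-in for one appliance manufacturer's cloud platform."""
    def __init__(self, name):
        self.name = name
        self.received = []  # commands delivered to this vendor's devices

    def deliver(self, device, cmd):
        # A real vendor cloud would push the command to the device here.
        self.received.append((device, cmd))

class ControlCloud:
    """Stand-in for the control end's cloud platform, which forwards
    each command to the vendor cloud that owns the target device."""
    def __init__(self, routing):
        self.routing = routing  # device id -> VendorCloud

    def send(self, device, cmd):
        self.routing[device].deliver(device, cmd)
```

Because routing is per device, one scene can fan out across several manufacturers' clouds in a single pass.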
According to the embodiments of the application, the smart scenes of a plurality of third-party platforms can be integrated through multithreading, creating an overall smart-home scene for the user. Moreover, when the overall smart scene is generated, third-party smart scenes similar to it can be selected intelligently and pre-selected for the user by default, reducing the user's operating cost.
4. Preset intelligent operation
In the embodiments of the application, smart scenes can be built into the control end in advance, presetting some device operations for the user. For example: in an away-from-home scene, turning off all lights, starting the sweeping robot, closing the curtains and arming the security system can be preset; in a good-night scene, turning off all lights, closing the curtains, turning off the air conditioner on a timer and turning on the air purifier can be preset. When the user selects such scenes, manual device selection and setup are reduced, lowering the user's operating cost.
The embodiments of the application can link the operation of hardware devices with the playing of content resources, and multiple devices can operate in parallel without serial execution, so the execution speed is fast and the user experience is good. In addition, content resources can be pushed intelligently according to the user's characteristics, the time, and so on. Furthermore, the smart scenes of multiple manufacturers can be integrated through the cloud platform, creating a unified experience for the user. Finally, the preset function reduces the difficulty of first-time setup and lowers the threshold of use.
Fig. 5 is a block diagram of a scene-based control device according to an embodiment of the present application. The apparatus may include:
an obtaining module 51, configured to obtain associated control information of the scene information, where the associated control information includes content to be played and a device to be controlled;
a control module 52 for playing the content and controlling the device to perform the required actions.
In one embodiment, the control module 52 is specifically configured to send control commands to at least one of the devices in parallel during the playing of the content.
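A sketch of that behavior: start playback, then fan the device commands out to worker threads instead of waiting for each device in turn. Playback and command delivery are reduced to log entries here; the function names are illustrative, not part of the application.

```python
import threading

def play_content(content, log):
    # Stand-in for starting playback of the content resource.
    log.append(f"playing:{content}")

def send_command(device, action, log):
    # Stand-in for delivering one control instruction to one device.
    log.append(f"{device}:{action}")

def execute_scene(content, commands):
    """Begin playing the content, then send the device control
    instructions in parallel rather than serially."""
    log = []
    play_content(content, log)
    threads = [threading.Thread(target=send_command, args=(d, a, log))
               for d, a in commands]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return log
```

The order of the device entries in the log is not guaranteed, which is exactly the point: no device waits on another.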
In one embodiment, as shown in FIG. 6, the control module 52 includes:
a content playing sub-module 521, configured to play the content through a process corresponding to the content.
In one embodiment, the control module 52 further includes:
a device control sub-module 522, configured to send a control instruction to at least one device through a process corresponding to the internet of things function.
In one embodiment, the device control sub-module 522 is specifically configured to perform at least one of:
triggering a first thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through Bluetooth;
triggering a second thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through infrared;
triggering a third thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through a local area network;
and triggering a fourth thread through a process corresponding to the function of the Internet of things, and sending a control instruction to the equipment connected through the cloud end.
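The four-thread design above can be sketched by grouping the pending commands by connection type and spawning one worker thread per type. The per-link sender is a hypothetical placeholder; a real IoT process would drive Bluetooth, infrared, LAN, and cloud stacks here.

```python
import threading

def send_via(link, device, command, results):
    # Placeholder for the link-specific transmission (Bluetooth, infrared,
    # LAN, or cloud); here it only records what would be sent.
    results.append(f"{link}->{device}:{command}")

def dispatch(commands, results):
    """Spawn one worker thread per connection type, mirroring the
    first/second/third/fourth-thread design described above."""
    by_link = {}
    for link, device, command in commands:
        by_link.setdefault(link, []).append((device, command))
    threads = []
    for link, items in by_link.items():
        def worker(link=link, items=items):
            for device, command in items:
                send_via(link, device, command, results)
        threads.append(threading.Thread(target=worker))
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Commands for the same link type stay ordered within their thread, while different link types proceed independently.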
In one embodiment, the apparatus further comprises:
the presetting module 53 is configured to preset associated control information of each scene information, where the associated control information includes content to be played, devices to be controlled, an execution sequence, and an execution action.
In one embodiment, the apparatus further comprises:
the saving module 54 is configured to save the associated control information of each piece of scene information to the cloud.
In one embodiment, the obtaining module is specifically configured to send the scene information to the cloud and to receive the associated control information of the scene information returned by the cloud.
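That exchange can be sketched as a lookup against a cloud-side store: the control end sends the scene information and receives back the content to play and the devices to control. The in-memory dictionary below is a hypothetical stand-in for the real cloud service.

```python
# Hypothetical cloud-side store of associated control information,
# keyed by scene information.
CLOUD_STORE = {
    "leave_home": {
        "content": "goodbye_tune",
        "devices": [("lights", "off"), ("security_system", "on")],
    },
}

def query_cloud(scene_info):
    """Send scene information to the 'cloud' and return its associated
    control information, or None if the scene is unknown."""
    return CLOUD_STORE.get(scene_info)
```

In a deployed system this lookup would be a network request to the control end's cloud platform rather than a dictionary access.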
In one embodiment, the apparatus further comprises a scene matching module 55 for performing at least one of:
recognizing the detected voice signal to obtain matched scene information;
detecting the temperature and/or humidity of the environment to obtain matched scene information;
identifying the human body signal to obtain matched scene information;
identifying the illumination intensity of the environment to obtain matched scene information;
detecting the pollution degree of the environment to obtain matched scene information;
and detecting and identifying any combination of voice signals, ambient temperature, ambient humidity, human body signals, ambient light intensity, ambient carbon dioxide concentration and ambient harmful gas concentration to obtain matched scene information.
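The scene matching above can be pictured as a rule-based mapping from raw sensor readings to a scene name. The thresholds and scene names below are illustrative assumptions, not values from the application.

```python
def match_scene(readings):
    """Map environment readings (voice, light, presence, pollution) to a
    matched scene name; thresholds here are illustrative only."""
    if readings.get("voice") == "good night":
        return "night_security"
    # Dim room with someone present -> hypothetical evening-lighting scene.
    if readings.get("light_lux", 1000) < 50 and readings.get("presence"):
        return "evening_lighting"
    # Polluted air -> hypothetical air-purification scene.
    if readings.get("pm25", 0) > 75:
        return "air_purification"
    return "default"
```

A production matcher would combine several signals (temperature, humidity, CO2, harmful gases) as the text lists, rather than checking them one at a time.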
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 7 is a block diagram of an electronic device according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the present application that are described and/or claimed herein.
As shown in fig. 7, the electronic apparatus includes: one or more processors 901, memory 902, and interfaces for connecting the various components, including a high-speed interface and a low-speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). Fig. 7 takes one processor 901 as an example.
Memory 902 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the scene-based control method provided herein. The non-transitory computer-readable storage medium of the present application stores computer instructions for causing a computer to execute the scene-based control method provided by the present application.
The memory 902, which is a non-transitory computer readable storage medium, may be used to store non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules (e.g., the acquisition module 51 and the control module 52 shown in fig. 5) corresponding to the scene-based control method in the embodiments of the present application. The processor 901 executes various functional applications of the server and data processing, i.e., implements the scene-based control method in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 902.
The memory 902 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device of the scene-based control method, and the like. Further, the memory 902 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory 902 may optionally include a memory remotely disposed from the processor 901, and these remote memories may be connected to the electronic device of the scene-based control method through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the scene-based control method may further include: an input device 903 and an output device 904. The processor 901, the memory 902, the input device 903 and the output device 904 may be connected by a bus or other means, and fig. 7 illustrates an example of a connection by a bus.
The input device 903 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic apparatus based on the scene control method, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointing stick, one or more mouse buttons, a track ball, a joystick, or other input devices. The output devices 904 may include a display device, auxiliary lighting devices (e.g., LEDs), tactile feedback devices (e.g., vibrating motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application specific ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host; it is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present application is not limited thereto, as long as the desired results of the technical solutions disclosed herein can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (20)

1. A method of scene-based control, comprising:
acquiring associated control information of scene information, wherein the associated control information comprises content to be played and equipment to be controlled;
playing the content and controlling the device to perform the required actions.
2. The method of claim 1, wherein playing the content and controlling the device to perform the desired action comprises:
and in the process of playing the content, sending control instructions to at least one device in parallel.
3. The method of claim 2, wherein playing the content comprises:
and playing the content through the process corresponding to the content.
4. The method of claim 2, wherein sending control instructions to at least one of the devices comprises:
and sending a control instruction to at least one device through a process corresponding to the function of the Internet of things.
5. The method of claim 4, wherein sending the control instruction to the at least one device through a corresponding process of the internet of things comprises at least one of:
triggering a first thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through Bluetooth;
triggering a second thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through infrared;
triggering a third thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through a local area network;
and triggering a fourth thread through a process corresponding to the function of the Internet of things, and sending a control instruction to the equipment connected through the cloud end.
6. The method according to any one of claims 1 to 5, further comprising:
and presetting associated control information of each scene information, wherein the associated control information comprises content needing to be played, equipment needing to be controlled, an execution sequence and an execution action.
7. The method of claim 6, further comprising:
and storing the associated control information of each scene information to the cloud.
8. The method according to any one of claims 1 to 5, wherein obtaining the associated control information of the scene information comprises:
sending the scene information to a cloud;
and receiving the associated control information of the scene information returned by the cloud.
9. The method according to any one of claims 1 to 5, further comprising at least one of:
recognizing the detected voice signal to obtain matched scene information;
detecting the temperature and/or humidity of the environment to obtain matched scene information;
identifying the human body signal to obtain matched scene information;
identifying the illumination intensity of the environment to obtain matched scene information;
detecting the pollution degree of the environment to obtain matched scene information;
and detecting and identifying any combination of voice signals, ambient temperature, ambient humidity, human body signals, ambient light intensity, ambient carbon dioxide concentration and ambient harmful gas concentration to obtain matched scene information.
10. A scene-based control apparatus comprising:
the acquisition module is used for acquiring associated control information of scene information, wherein the associated control information comprises content to be played and equipment to be controlled;
and the control module is used for playing the content and controlling the equipment to execute the required action.
11. The apparatus according to claim 10, wherein the control module is specifically configured to send control commands to at least one of the devices in parallel during the playing of the content.
12. The apparatus of claim 11, wherein the control module comprises:
and the content playing submodule is used for playing the content through the process corresponding to the content.
13. The apparatus of claim 11, wherein the control module further comprises:
and the equipment control submodule is used for sending a control instruction to at least one piece of equipment through a process corresponding to the function of the Internet of things.
14. The apparatus of claim 13, wherein the device control sub-module is specifically configured to perform at least one of:
triggering a first thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through Bluetooth;
triggering a second thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through infrared;
triggering a third thread through a process corresponding to the function of the Internet of things, and sending a control instruction to equipment connected through a local area network;
and triggering a fourth thread through a process corresponding to the function of the Internet of things, and sending a control instruction to the equipment connected through the cloud end.
15. The apparatus of any one of claims 10 to 14, further comprising:
the preset module is used for presetting the associated control information of each scene information, wherein the associated control information comprises the content to be played, the equipment to be controlled, an execution sequence and an execution action.
16. The apparatus of claim 15, further comprising:
and the storage module is used for storing the associated control information of each scene information to the cloud.
17. The apparatus according to any one of claims 10 to 14, wherein the obtaining module is specifically configured to send the context information to a cloud; and receiving the associated control information of the scene information returned by the cloud.
18. The apparatus according to any one of claims 10 to 14, further comprising a scene matching module for performing at least one of:
recognizing the detected voice signal to obtain matched scene information;
detecting the temperature and/or humidity of the environment to obtain matched scene information;
identifying the human body signal to obtain matched scene information;
identifying the illumination intensity of the environment to obtain matched scene information;
detecting the pollution degree of the environment to obtain matched scene information;
and detecting and identifying any combination of voice signals, ambient temperature, ambient humidity, human body signals, ambient light intensity, ambient carbon dioxide concentration and ambient harmful gas concentration to obtain matched scene information.
19. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1 to 9.
20. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1 to 9.
CN202010970095.5A 2020-09-15 2020-09-15 Scene-based control method, device, equipment and storage medium Pending CN112034726A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010970095.5A CN112034726A (en) 2020-09-15 2020-09-15 Scene-based control method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112034726A true CN112034726A (en) 2020-12-04

Family

ID=73589396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010970095.5A Pending CN112034726A (en) 2020-09-15 2020-09-15 Scene-based control method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112034726A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202632077U * 2012-05-24 2012-12-26 李强 Intelligent household master control host
CN106790628A * 2016-12-31 2017-05-31 广东博意建筑设计院有限公司 Smart home house keeper central control system and its control method with body-sensing function
CN206833230U * 2017-04-18 2018-01-02 青岛有屋科技有限公司 A kind of Intelligent household voice control system of achievable man-machine interaction
CN211043962U * 2019-09-26 2020-07-17 星络智能科技有限公司 Intelligent household control system

Non-Patent Citations (1)

Title
韩思奇 等: "LabVIEW虚拟仪器从入门到精通", 31 August 2020, pages: 205 - 209 *

Cited By (10)

Publication number Priority date Publication date Assignee Title
CN112558575A * 2020-12-22 2021-03-26 珠海格力电器股份有限公司 Equipment linkage control method and device, storage medium and equipment
CN112738268A * 2021-01-05 2021-04-30 青岛海尔科技有限公司 Equipment centralized control method and device based on user behaviors
CN114995166A * 2021-03-02 2022-09-02 青岛海尔多媒体有限公司 Method and device for switching room scenes and electronic equipment
CN113032267A * 2021-03-30 2021-06-25 深圳Tcl新技术有限公司 Intelligent scene testing method and device, electronic equipment and storage medium
CN113032267B * 2021-03-30 2024-03-12 深圳Tcl新技术有限公司 Intelligent scene test method and device, electronic equipment and storage medium
CN114584415A * 2022-01-24 2022-06-03 杭州博联智能科技股份有限公司 Whole-house intelligent scene distributed implementation method, system, device and medium
CN114584415B * 2022-01-24 2023-11-28 杭州博联智能科技股份有限公司 Method, system, device and medium for realizing scene distribution of full house intelligence
WO2023226768A1 * 2022-05-23 2023-11-30 深圳绿米联创科技有限公司 Device control method and apparatus, device, and storage medium
CN116300503A * 2023-03-29 2023-06-23 广州市平可捷信息科技有限公司 Multithreading intelligent furniture control method and system
CN116300503B * 2023-03-29 2023-08-18 广州市平可捷信息科技有限公司 Multithreading intelligent furniture control method and system

Similar Documents

Publication Publication Date Title
CN112034726A (en) Scene-based control method, device, equipment and storage medium
CN111276139B (en) Voice wake-up method and device
JP6965384B2 (en) Smart device wake-up methods, smart devices, and computer-readable storage media
CN111192591B (en) Awakening method and device of intelligent equipment, intelligent sound box and storage medium
CN112530419B (en) Speech recognition control method, device, electronic equipment and readable storage medium
KR20190024762A (en) Music Recommendation Method, Apparatus, Device and Storage Media
CN111261159B (en) Information indication method and device
US20210329101A1 (en) Creating a cinematic storytelling experience using network-addressable devices
CN110501918B (en) Intelligent household appliance control method and device, electronic equipment and storage medium
CN112929246B (en) Processing method of operation instruction, storage medium and user terminal
KR102331254B1 (en) Speech recognition control method, apparatus, electronic device and readable storage medium
CN110956963A (en) Interaction method realized based on wearable device and wearable device
JP6990728B2 (en) How to activate voice skills, devices, devices and storage media
CN111177453A (en) Method, device and equipment for controlling audio playing and computer readable storage medium
CN111048085A (en) Off-line voice control method, system and storage medium based on ZIGBEE wireless technology
US20240203416A1 (en) Combining Device or Assistant-Specific Hotwords in a Single Utterance
CN112133307A (en) Man-machine interaction method and device, electronic equipment and storage medium
CN110768877A (en) Voice control instruction processing method and device, electronic equipment and readable storage medium
JP7051800B2 (en) Voice control methods, voice control devices, electronic devices, and readable storage media
WO2022268136A1 (en) Terminal device and server for voice control
JP2022024110A (en) Voice recognition method, device, electronic apparatus and storage medium
CN109658924B (en) Session message processing method and device and intelligent equipment
CN113495621A (en) Interactive mode switching method and device, electronic equipment and storage medium
CN111160318B (en) Electronic equipment control method and device
CN115547321A (en) Service processing method and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210512

Address after: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant after: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

Applicant after: Shanghai Xiaodu Technology Co.,Ltd.

Address before: 100085 Baidu Building, 10 Shangdi Tenth Street, Haidian District, Beijing

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.