WO2022052864A1 - Scene switching method, terminal and storage medium - Google Patents
Scene switching method, terminal and storage medium
- Publication number
- WO2022052864A1 (PCT/CN2021/116320)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene
- target
- information corresponding
- switched
- scenario
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2816—Controlling appliance services of a home automation network by calling their functionalities
- H04L12/282—Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/28—Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
- H04L12/2803—Home automation networks
- H04L12/2805—Home Audio Video Interoperability [HAVI] networks
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/226—Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Definitions
- the present application relates to the technical field of voice interaction, and in particular, to a scene switching method, terminal and storage medium.
- far-field devices have been widely used. There may be multiple far-field devices in some users' homes.
- When a user wakes up a far-field device through far-field voice, the device automatically runs according to parameters preset by the user or according to default parameters. For example: when the user is currently in the living room and speaks the wake-up word "small T small T", the far-field device in the living room receives the wake-up word and turns on; when the user enters the room (bedroom) from the living room and speaks the wake-up word "small T small T" again, the far-field device in the room turns on.
- the far-field devices that are turned on in the new scenario still operate according to the parameters preset by the user or the default parameters.
- The embodiments of the present application aim to solve the problem that, when the user moves from a first scene to a second scene, the far-field devices turned on in the second scene still run according to the parameters preset by the user or according to default parameters.
- the present application provides a scene switching method on the one hand, and the scene switching method includes the following steps:
- each target device is matched with each device to be switched.
- Another aspect of the present application further provides a terminal, where the terminal includes a memory, a processor, and a scene switching program stored in the memory and runnable on the processor; when the processor executes the scene switching program, it is used to:
- each target device is matched with each device to be switched.
- another aspect of the present application also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, it is used to realize:
- each target device is matched with each device to be switched.
- When it is determined that the user has moved from the first scene to the second scene, the voiceprint information of the user in the second scene is collected; according to the voiceprint information, each device to be switched in the first scene is determined, and the operating status information corresponding to each device to be switched is obtained.
- The operating status information corresponding to each device to be switched is sent to each target device in the second scene, and each target device is configured with the operating parameters corresponding to that information. This avoids the problem that, when the user moves from the first scene to the second scene, the far-field devices turned on in the second scene still operate according to the parameters preset by the user in the first scene or according to default parameters.
- Through the cooperative relationship between the devices, switching of device operating parameters between different scenes is realized.
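The flow summarized above can be illustrated with a short Python sketch. This is not the claimed implementation; the `Device` class and `switch_scene` function are hypothetical names used only to show the state transfer from devices to be switched onto matching target devices.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    device_type: str                           # e.g. "air_conditioner", "lamp"
    scene: str                                 # e.g. "living_room", "bedroom"
    state: dict = field(default_factory=dict)  # operating status information

def switch_scene(devices_to_switch: list, target_devices: list) -> dict:
    """Copy the operating state of each device to be switched in the first
    scene onto the matching target device (same type) in the second scene."""
    by_type = {d.device_type: d for d in devices_to_switch}
    transferred = {}
    for target in target_devices:
        source = by_type.get(target.device_type)
        if source is not None:
            target.state = dict(source.state)  # configure matching parameters
            transferred[target.device_type] = target.state
    return transferred
```

A target device with no counterpart in the first scene is simply left with its own parameters, which matches the requirement that only matched devices are switched.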
- FIG. 1 is a schematic structural diagram of a terminal in the hardware operating environment involved in an embodiment of the present application;
- FIG. 2 is a schematic flowchart of a first embodiment of the scene switching method of the present application;
- FIG. 3 is a schematic flowchart of a second embodiment of the scene switching method of the present application;
- FIG. 4 is a schematic flowchart of a third embodiment of the scene switching method of the present application;
- FIG. 5 is a schematic flowchart of determining whether the user has moved from the first scene to the second scene in the scene switching method of the present application;
- FIG. 6 is a schematic flowchart of collecting the voiceprint information of the user in the second scene in the scene switching method of the present application;
- FIG. 7 is a schematic flowchart of determining each device to be switched in the first scene according to the voiceprint information in the scene switching method of the present application;
- FIG. 8 is a schematic flowchart of the steps after, when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtaining from the enabled devices the enabled device that matches the target device and determining it as the device to be switched in the first scene, in the scene switching method of the present application;
- FIG. 9 is a schematic flowchart of determining the operating state information corresponding to each target device in the second scene according to the operating state information corresponding to each device to be switched, in the scene switching method of the present application;
- FIG. 10 is a schematic flowchart of sending the operating state information corresponding to each device to be switched to each target device in the second scene and configuring each target device with the corresponding operating parameters, in the scene switching method of the present application.
- The main solution of the embodiments of the present application is: determine whether the user has moved from the first scene to the second scene; if so, collect the voiceprint information of the user in the second scene; determine each device to be switched in the first scene according to the voiceprint information, and obtain the operating status information corresponding to each device to be switched; and determine the operating status information corresponding to each target device in the second scene according to the operating status information corresponding to each device to be switched; wherein each target device matches each device to be switched.
- In the prior art, the far-field devices that are turned on in the new scene still operate according to the parameters preset by the user or according to default parameters.
- In the present application, the voiceprint information of the user in the second scene is collected; each device to be switched in the first scene is determined according to the voiceprint information, and the operating status information corresponding to each device to be switched is sent to each target device in the second scene; each target device is configured with the operating parameters corresponding to that information, realizing the switching of device operating parameters between scenes.
- FIG. 1 is a schematic structural diagram of a terminal of a hardware operating environment involved in the solution of an embodiment of the present application.
- the terminal may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
- the communication bus 1002 is used to realize the connection and communication between these components.
- The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
- the network interface 1004 may include a standard wired interface and a wireless interface (eg, a WI-FI interface).
- the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
- the terminal may further include a camera, an RF (Radio Frequency, radio frequency) circuit, a sensor, a remote control, an audio circuit, a WiFi module, a detector, and the like.
- the terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a temperature sensor, etc., which will not be repeated here.
- The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal device, which may include more or fewer components than shown, combine some components, or have a different arrangement of components.
- the memory 1005 as a computer-readable storage medium may include an operating system, a network communication module, a user interface module, and a scene switching program.
- the network interface 1004 is mainly used to connect to the background server and perform data communication with the background server;
- the user interface 1003 is mainly used to connect to the client (client) and perform data communication with the client;
- The processor 1001 can be used to invoke the scene switching program stored in the memory 1005 and perform the following operations:
- each target device is matched with each device to be switched.
- FIG. 2 is a schematic flowchart of a first embodiment of a method for switching a scenario of the present application.
- the method for switching a scenario includes the following steps:
- Step S10 determining whether the user moves from the first scene to the second scene
- the multiple far-field devices constitute a wireless voice interaction system.
- The user selects one far-field device in the wireless voice interaction system as the master device and uses the other far-field devices as slave devices; the master device and the slave devices are connected wirelessly, or connected through a wireless hotspot set up in the environment.
- The scenes include but are not limited to the living room, rooms (bedrooms), the kitchen, etc.; the first scene refers to the scene where the user was before entering the current scene; the second scene refers to the current scene where the user is located.
- the step of determining whether the user moves from the first scene to the second scene includes:
- Step S11 obtaining the position information corresponding to each of the activated devices in the first scene and the position information corresponding to each of the target devices in the second scene;
- Step S12 judging whether the location information corresponding to each enabled device is the same as the location information corresponding to each target device.
- the far-field device in each scenario has a built-in positioning module.
- The main control device obtains the network identification parameters of the far-field device through the network connected to the far-field device, and obtains the satellite positioning information of the far-field device through the satellite positioning module built into the device; further, according to the network identification parameters combined with the satellite positioning information, the position information of the far-field device is obtained, and the scene where the user is currently located can be determined according to this position information.
- the same method is used to obtain the position information corresponding to each activated device in the first scene and the position information corresponding to each target device in the second scene, and further determine the position information corresponding to each activated device and each target device. Whether the location information is the same, it is determined whether the scene where the user is located has changed, that is, whether the user has moved from the first scene to the second scene.
- The target device is an awakened device in the second scene; an awakened device refers to a device that has been woken up but has not yet acquired operating parameters.
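Steps S11 and S12 reduce to comparing the locations of the enabled devices against those of the target devices. A minimal sketch (the function name and set representation are illustrative, not part of the claims):

```python
def user_moved(enabled_locations: set, target_locations: set) -> bool:
    """Step S12: a scene change is inferred when the locations of the devices
    enabled in the first scene differ from those of the target devices."""
    return enabled_locations != target_locations
```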
- Step S20 if yes, collect the voiceprint information of the user in the second scene
- the step of collecting the voiceprint information of the user in the second scene includes:
- Step S21 if the position information corresponding to each enabled device is different from the position information corresponding to each target device, determining that the user has moved from the first scene to the second scene;
- Step S22 detecting whether a scene switching instruction is received
- Step S23 if the scene switching instruction is received, collect the voiceprint information of the user in the second scene.
- the slave device detects in real time whether a scene switching instruction sent by the user through far-field voice is received, and if a scene switching instruction sent by the user is received, the user's voiceprint information in the second scene is further obtained.
- the built-in voice detection unit of the slave device detects the voice signal within the working range of the slave device in real time.
- The voice information of the user corresponding to the scene switching instruction is obtained, and voiceprint recognition is performed on the voice information to extract the voiceprint feature information; the user's voiceprint information is obtained based on this feature information.
- The far-field devices in the second scene are turned on based on a far-field control instruction sent by the user.
- The slave device uses the built-in voice detection unit to detect, in real time, the voice signal within the working range of the device.
- When the wake-up word "small T small T" sent by the user through far-field voice is received, the multiple far-field devices in this scene are turned on.
- The wake-up word is preset according to the user's needs and is not limited here. Once the slave device detects the wake-up word, it performs voice wake-up processing to wake up the algorithm unit from the standby state.
- After the algorithm unit is awakened, that is, converted from the standby state to the active state, the voice detection unit acquires the voice signal; correspondingly, the algorithm unit performs arithmetic processing on the acquired voice signal according to a predetermined method, including echo cancellation, reverberation cancellation, sound source localization, etc., and finally obtains a clear voice signal, which is transmitted to the control system of the smart device.
- The control system of the smart device uploads the acquired voice signal to the server/cloud, so that the server/cloud can perform voice recognition on it and generate a corresponding turn-on instruction according to the recognition result, which is returned to the control system of the smart device; the control system then turns on the far-field devices in the scene according to the turn-on instruction.
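The wake-up path described above (the detection unit hears the wake word, then wakes the algorithm unit from standby) can be sketched as follows; the class and function names are illustrative only, and real wake-word detection operates on audio features rather than recognized text.

```python
def detect_wake_word(recognized_text: str,
                     wake_word: str = "small T small T") -> bool:
    """Stand-in for the voice detection unit: fires only when the preset
    wake word appears in the recognized speech."""
    return wake_word.lower() in recognized_text.lower()

class AlgorithmUnit:
    """Woken by an interrupt signal; converts standby -> active, after which
    it would run echo cancellation, reverberation cancellation, etc."""
    def __init__(self) -> None:
        self.state = "standby"

    def wake(self) -> None:
        self.state = "active"
```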
- the voice detection unit may be a low-power microphone unit with a wake-up function.
- Low power consumption means that the power consumption of the microphone unit is very low. By using such a microphone, energy consumption can be saved.
- the microphone unit may be a microphone array including at least two microphones, and the use of a plurality of microphones can improve the collection sensitivity of the microphone unit for voice signals.
- A microphone can be set at each of the left, center, and right positions below the far-field device, so that the voice signal from the user can be better captured.
- the algorithm unit in the standby state can be woken up. For example, when any microphone detects a wake-up word, a wake-up signal (interrupt signal) can be sent to the algorithm unit, thereby activating the algorithm unit to perform operation functions such as echo cancellation, reverberation cancellation, and sound source localization.
- Step S30 Determine each device to be switched in the first scene according to the voiceprint information, and obtain operating status information corresponding to each device to be switched;
- The operating state information includes an on state, an off state, a recovery state, etc., and differs between devices. For example, the operating state information of an air conditioner includes a cooling operating state, a heating operating state, a dehumidification operating state, a defrosting operating state, etc.; the operating state information of a fan includes the natural-wind mode, the wind speed gear, and whether a timer is set.
- The master control device obtains each enabled device in the first scene according to the voiceprint information, and further obtains the operating status information corresponding to each enabled device in the first scene from the storage module. For example: the master control device receives an acquisition instruction for the operating status information of the enabled devices, where the acquisition instruction is triggered after the master control device receives the voiceprint information of the user in the second scene; based on the acquisition instruction, the status information corresponding to each enabled device in the first scene is acquired from the storage module. This status information is collected by the slave devices from each enabled far-field device through a data collection module and sent to the master control device for storage.
- The status information of each enabled device in the first scene is matched with the device information corresponding to each target device in the second scene, each device to be switched in the first scene is determined based on this matching operation, and the operating status information corresponding to each device to be switched is obtained. Therefore, referring to FIG. 7, the step of determining each device to be switched in the first scene according to the voiceprint information includes:
- Step S31 acquiring device information corresponding to each enabled device in the first scenario and device information corresponding to each of the target devices in the second scenario;
- Step S32 matching the voiceprint information in the first scene with the voiceprint information in the second scene, and respectively matching the device information corresponding to each enabled device with the device information corresponding to each target device;
- Step S33 when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtaining, from the enabled devices, the enabled device that matches the target device, and determining the enabled device that matches the target device as the device to be switched in the first scene.
- When acquiring the device information corresponding to each enabled device in the first scene and the device information corresponding to each target device in the second scene, the master control device first matches the voiceprint information in the first scene with the voiceprint information in the second scene. If they do not match, it means the user who turned on the far-field devices in the first scene and the user who woke up the far-field devices in the second scene are not the same person, so voice prompt information is sent to remind the user that the current voiceprint information does not match. If the voiceprint information in the first scene matches the voiceprint information in the second scene, the master control device further matches the device information corresponding to each enabled device with the device information corresponding to each target device, where the device information includes device type, device capability, device usage time, etc.; if the device information corresponding to an enabled device does not match the device information corresponding to any target device, a voice prompt message is sent to prompt the user to confirm the scene that currently needs to be switched.
- The enabled device that matches the target device is obtained from the enabled devices, and that enabled device is determined to be the device to be switched in the first scene. When matching, the devices turned on in the first scene are matched against the devices awakened in the second scene; for example, if the awakened devices include an air conditioner, lamps, and a speaker, the air conditioners, lamps, and speakers in the first scene and the second scene are matched respectively.
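Steps S32 and S33 amount to a voiceprint gate followed by per-type device matching. A hedged sketch follows; voiceprint comparison is reduced to string equality here, which a real system would replace with a similarity score, and the dictionary keys are illustrative.

```python
def find_devices_to_switch(vp_first: str, vp_second: str,
                           enabled: list, awakened: list) -> list:
    """Return the enabled devices from the first scene whose type matches an
    awakened device in the second scene, but only when the voiceprints match;
    otherwise the method would issue a voice prompt instead."""
    if vp_first != vp_second:
        return []                      # voiceprint mismatch: different user
    awakened_types = {d["device_type"] for d in awakened}
    return [d for d in enabled if d["device_type"] in awakened_types]
```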
- the storage module of the main control device stores the operating status information corresponding to each enabled device in the first scenario, as shown in Table 1:
- Table 1 only lists the storage of some equipment information, wherein the equipment information also includes other operating states and operating parameters, which will not be listed one by one here.
- the corresponding operating status and operating parameter information of each device to be switched in the first scenario can be obtained.
- the air conditioner is in the cooling mode, the operating cooling temperature is 26°C, the wind speed gear is mid-range, and the dehumidification mode is turned on;
- the current brightness level of the lamp is mid-range, the light mode is soft light mode;
- the volume of the audio is adjusted to 60%, and the playback mode is Bluetooth playback mode.
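The stored states enumerated above might be represented in memory along the following lines. This is a hypothetical layout (Table 1 itself is not reproduced in this text), shown only to make the per-device record structure concrete.

```python
# Hypothetical in-memory form of the per-device operating-state records
# held by the master control device's storage module.
stored_states = {
    "air_conditioner": {"mode": "cooling", "temp_c": 26,
                        "fan_speed": "mid", "dehumidify": True},
    "lamp":            {"brightness": "mid", "light_mode": "soft"},
    "speaker":         {"volume_pct": 60, "playback": "bluetooth"},
}
```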
- the voiceprint information in the first scene is described.
- the method further includes:
- Step S320 if the device to be switched in the first scene has a video playback device, acquire the playback content and playback progress of the video playback device;
- Step S321 sending the playback content and the playback progress to the video playback device in the second scene, so that the video playback device in the second scene displays the playback content according to the playback progress .
- If the main control device detects that there is a video playback device, such as a TV, among the devices to be switched in the first scene, it obtains the content currently played by the TV and the playback progress information; when performing the scene switching operation, the previously played content and the playback progress information are sent to the TV in the second scene, so that the TV in the second scene displays the played content according to the playback progress.
- For example: the TV in the first scene is broadcasting the award ceremony after the Chinese women's volleyball team won the championship on the CCTV-5 sports channel, and the playback progress is the second minute of the ceremony; the TV in the second scene then also broadcasts the award ceremony on the CCTV-5 sports channel, starting from the second minute of the ceremony.
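Steps S320 and S321 can be sketched as a simple handoff of two fields; the dictionary keys here are illustrative, not a defined protocol.

```python
def hand_off_playback(source_tv: dict, dest_tv: dict) -> None:
    """Copy the playing content and playback progress from the video playback
    device in the first scene to the one in the second scene."""
    dest_tv["content"] = source_tv["content"]
    dest_tv["progress_s"] = source_tv["progress_s"]  # progress in seconds
```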
- Step S40 determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
- The operating state information corresponding to each target device in the second scene is determined according to the operating state information corresponding to each device to be switched in the first scene; wherein, that each target device matches each device to be switched means that the device types, device capabilities, and device usage times of the target device and the device to be switched all match.
- the main control device determines the operating parameters corresponding to each target device according to the operating parameters in the operating status information corresponding to each device to be switched in the first scenario.
- The step of determining the operating state information corresponding to each target device in the second scene according to the operating state information corresponding to each device to be switched includes:
- Step S41 Send the operating state information corresponding to each device to be switched to each target device in the second scenario, and configure the operating parameters corresponding to the operating state information for each target device.
- The main control device obtains the corresponding operating parameters according to the operating state information corresponding to each device to be switched in the first scene, where the operating parameters include operating condition parameters and operating state parameters; the operating condition parameters include some or all of the set temperature, indoor temperature, and outdoor temperature, and the operating state parameters include some or all of exhaust temperature, operating current, exhaust pressure, evaporating temperature, and condensing temperature.
- The master control device distributes the acquired operating parameters corresponding to each device to be switched to each corresponding awakened device in the second scene, and each awakened target device in the second scene sets its current operation according to the received operating parameters.
- for example, suppose that in the first scene the air conditioner to be switched is currently cooling at 26°C with a mid-range wind speed and an up-and-down air-sweeping mode, the lamps to be switched are in warm-light mode with a mid-range brightness level, and the fan to be switched runs at wind-speed gear three with a left-and-right swing mode. Then the air conditioner in the second scene operates at a cooling temperature of 26°C with a mid-range wind speed and up-and-down air sweeping; the lamps operate in warm-light mode at a mid-range brightness level; and the fan runs at wind-speed gear three with a left-and-right swing.
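The device-matching and parameter handover described above can be sketched as follows; the `Device` class, its field names, and the matching rule are illustrative assumptions, not the patent's implementation:

```python
# Illustrative sketch: match each device to be switched in the first scene to a
# target device in the second scene by device type, capability and usage time,
# then copy its operating parameters across. All names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    device_type: str          # e.g. "air_conditioner"
    capabilities: frozenset   # e.g. {"cooling", "dehumidify"}
    usage_time: str           # e.g. "evening"
    params: dict = field(default_factory=dict)

def matches(src: Device, dst: Device) -> bool:
    """A target device matches when type, capabilities and usage time all match."""
    return (src.device_type == dst.device_type
            and src.capabilities == dst.capabilities
            and src.usage_time == dst.usage_time)

def hand_over(to_switch: list, targets: list) -> list:
    """Copy operating parameters from each device to be switched to its match."""
    switched = []
    for src in to_switch:
        for dst in targets:
            if matches(src, dst):
                dst.params = dict(src.params)  # configure target with same parameters
                switched.append(dst)
                break
    return switched

ac1 = Device("ac_livingroom", "air_conditioner", frozenset({"cooling"}), "evening",
             {"mode": "cooling", "temp_c": 26, "fan": "mid"})
ac2 = Device("ac_bedroom", "air_conditioner", frozenset({"cooling"}), "evening")
result = hand_over([ac1], [ac2])
print(result[0].params)   # the bedroom unit inherits the living-room settings
```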
- after the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene and configuring for each target device the corresponding operating parameters, the method includes:
- Step S42: Receive the result information fed back by each target device in the second scene, and determine whether the operating state information corresponding to each device to be switched in the first scene has been successfully switched to each target device in the second scene;
- Step S43: If the operating state information corresponding to each device to be switched has been successfully switched to each target device in the second scene, send a control instruction to turn off each device to be switched in the first scene;
- Step S44: If the operating state information corresponding to each device to be switched has not been successfully switched to each target device in the second scene, repeat the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene.
- each awakened device in the second scene sends the result of the operating-parameter switch to the main control device; the result information includes the operating state information, operating parameter information, and so on of each awakened device;
- the main control device then determines whether the operating state information corresponding to each device to be switched in the first scene has been successfully switched to each target device in the second scene, for example by judging whether the operating state information corresponding to each device to be switched in the first scene is the same as the operating state information corresponding to each target device in the second scene. If they are the same, the operating state information corresponding to each device to be switched has been successfully switched to each target device in the second scene;
- in that case, the main control device sends a control instruction to turn off each device to be switched in the first scene. If they are not the same, the operating state information corresponding to each device to be switched has not been successfully switched to each target device in the second scene, and the step of sending the operating state information of each device to be switched is repeated.
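The send-verify-retry flow of steps S41–S44 can be sketched roughly as follows; `send_params`, `read_back`, and `turn_off` are hypothetical stand-ins for real device I/O, not an API from the patent:

```python
# Sketch of the confirm-then-close flow: push parameters to each target device,
# verify via the result information it feeds back, turn off the old device on
# success, and retry the failed pairs otherwise.
def switch_with_retry(pairs, send_params, read_back, turn_off, max_retries=3):
    """pairs: list of (device_to_switch, target_device, params) tuples."""
    for _attempt in range(max_retries):
        pending = []
        for src, dst, params in pairs:
            send_params(dst, params)                 # push parameters to the target
            if read_back(dst) == params:             # result info fed back by target
                turn_off(src)                        # success: close the old device
            else:
                pending.append((src, dst, params))   # failure: retry this pair
        pairs = pending
        if not pairs:
            return True
    return False                                     # gave up after max_retries

# Fake device I/O backed by a dict (stands in for real network calls).
remote = {}
closed = []
def send_params(device, params): remote[device] = dict(params)
def read_back(device): return remote.get(device)

ok = switch_with_retry([("old_ac", "new_ac", {"temp_c": 26, "fan": "mid"})],
                       send_params, read_back, closed.append)
print(ok, closed)   # True ['old_ac']
```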
- before switching, the conditions for constructing the scenes need to be determined; constructing a scene involves the following:
- the master control device includes a storage module and a matching module.
- the storage module is used to store the data sent by the slave devices;
- the matching module is used to match the voiceprint information and device information in the first scene and the second scene; the slave devices include smart air conditioners, smart TVs, smart fans, and smart audio devices;
- UDP (User Datagram Protocol) is a connectionless transport-layer protocol in the OSI reference model that provides a simple, unreliable, transaction-oriented message delivery service;
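As a minimal illustration of this kind of connectionless datagram messaging (the JSON payload below is an assumption for illustration, not the patent's message format):

```python
# Minimal UDP exchange over the loopback interface: no connection setup, each
# sendto() is an independent datagram with no delivery guarantee.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))                 # let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b'{"device": "ac", "temp_c": 26}', addr)   # fire-and-forget datagram

data, _ = receiver.recvfrom(1024)
print(data.decode())
sender.close()
receiver.close()
```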
- the voiceprint information of the user in the second scene is collected; according to the voiceprint information, each device to be switched in the first scene is determined, and the operating state information corresponding to each device to be switched is acquired.
- the operating state information corresponding to each device to be switched is sent to each target device in the second scene, and the operating parameters corresponding to the operating state information are configured for each target device. By establishing a cooperative relationship between far-field devices, the operating parameters of far-field devices can be switched between different user scenes. In addition, by issuing far-field voice commands, the far-field devices are turned on automatically, which reduces the user's operational complexity, realizes intelligent perception and intelligent collaboration among far-field devices, and brings users a more comfortable and convenient home environment.
- referring to FIG. 3, a second embodiment of the scene switching method of the present application is proposed.
- the difference between the second embodiment of the scene switching method and the first embodiment of the scene switching method is that before the step of determining whether the user moves from the first scene to the second scene, the method includes:
- Step S13: Acquire the device startup instruction in the first scene, and acquire the voiceprint information in the first scene according to the device startup instruction;
- Step S14: Associate the voiceprint information in the first scene with the first scene.
- when a slave device receives the wake-up word "small T small T" sent by the user through far-field voice, it wakes up multiple far-field devices in the first scene according to the wake-up word and configures operating parameters for each awakened far-field device to turn it on; the master control device then obtains the device capability, device status, and other information corresponding to each turned-on device. At the same time, the acquired user voice is preprocessed to remove non-speech signals and silence.
- MFCC: Mel-frequency cepstral coefficients.
- Pre-emphasis: differencing the speech signal.
- Framing: segmenting the speech data into frames.
- Hamming window: applying a window to each frame's signal to reduce the Gibbs phenomenon.
- Fast Fourier transform: transforming the time-domain signal into the signal's power spectrum.
- Triangular band-pass filters: each triangular filter covers a range similar to a critical bandwidth of the human ear, in order to simulate the ear's masking effect.
- Discrete cosine transform: removing the correlation between the dimensions of the signal and mapping the signal to a low-dimensional space.
- speech dynamic characteristic parameters are obtained from the extracted MFCC parameters as the user's voiceprint feature information, thereby obtaining the user's voiceprint information in the first scene.
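The MFCC pipeline listed above can be sketched compactly with numpy; the frame length, hop, filter count, and coefficient count below are common defaults, not values from the patent:

```python
# Sketch of the MFCC steps above: pre-emphasis, framing, Hamming window,
# FFT power spectrum, triangular mel filterbank, log, and DCT-II.
import numpy as np

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_filt=26, n_ceps=13):
    # 1. Pre-emphasis: difference the speech signal
    signal = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # 2. Framing
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx]
    # 3. Hamming window (reduces the Gibbs phenomenon at frame edges)
    frames = frames * np.hamming(frame_len)
    # 4. FFT -> power spectrum of each frame
    nfft = 512
    power = np.abs(np.fft.rfft(frames, nfft)) ** 2 / nfft
    # 5. Triangular mel filterbank (approximates critical bands of the ear)
    high = 2595 * np.log10(1 + sr / 2 / 700)                 # Hz -> mel
    mel_pts = np.linspace(0.0, high, n_filt + 2)
    hz_pts = 700 * (10 ** (mel_pts / 2595) - 1)              # mel -> Hz
    bins = np.floor((nfft + 1) * hz_pts / sr).astype(int)
    fbank = np.zeros((n_filt, nfft // 2 + 1))
    for m in range(1, n_filt + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    feat = np.log(power @ fbank.T + 1e-10)
    # 6. DCT-II: decorrelate the filterbank energies, keep the low dimensions
    n = np.arange(n_filt)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_filt)))
    return feat @ dct.T

x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)       # 1 s test tone
coeffs = mfcc(x)
print(coeffs.shape)      # (98, 13): frames x cepstral coefficients
```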
- the slave device associates the acquired voiceprint information in the first scene with the first scene. For example, if the user's current scene is the living room, the voiceprint information currently acquired in the living room is bound to the living-room scene. When the user later enters another scene, the voiceprint information of the first scene then reveals that the user's previous scene was the living room.
- the voiceprint information can also be associated with the user's personal information. For example, the user information and voiceprint feature information of each family member, such as grandpa, grandma, father, mother, and the children, are collected separately, and each member's user information is associated with that member's voiceprint feature information (for example, dad's user information with dad's voiceprint feature information). Further, the master device obtains the voiceprint information sent by the slave device, the information bound to the scene (such as living-room scene + voiceprint subject), and the device status information in the scene, and stores the obtained information in the corresponding storage unit.
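A toy version of this binding on the master device might look like the following; the store layout and identifiers are assumptions for illustration:

```python
# Bind voiceprint features to the scene and user they were captured in, so
# that when the user reappears in another scene the previous scene is known.
scene_bindings = {}   # voiceprint_id -> {"scene": ..., "user": ..., "devices": ...}

def bind(voiceprint_id, scene, user, device_states):
    """Store the scene + user + device-state record for this voiceprint."""
    scene_bindings[voiceprint_id] = {
        "scene": scene, "user": user, "devices": device_states}

def previous_scene(voiceprint_id):
    """Look up where the holder of this voiceprint came from, if known."""
    entry = scene_bindings.get(voiceprint_id)
    return entry["scene"] if entry else None

bind("vp_dad_01", "living_room", "dad", {"ac": {"temp_c": 26}})
print(previous_scene("vp_dad_01"))   # living_room
```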
- referring to FIG. 4, a third embodiment of the scene switching method of the present application is proposed.
- the difference between the third embodiment and the first and second embodiments of the scene switching method is that the step of, if the scene switching instruction is received, collecting the user's voiceprint information in the second scene includes:
- Step S230: If multiple scene switching instructions are received, acquire the voiceprint information corresponding to each scene switching instruction;
- Step S231: Match the voiceprint information corresponding to each scene switching instruction with each piece of target voiceprint information, and acquire the voiceprint information that matches the target voiceprint information;
- Step S232: If, among the voiceprint information corresponding to the scene switching instructions, there is voiceprint information that matches the target voiceprint information, determine the scene switching instruction corresponding to the matching voiceprint information as the target scene switching instruction, the user corresponding to the target scene switching instruction being the target user;
- Step S233: Collect the scene switching instruction of the target user and use it as the user's voiceprint information in the second scene.
- the slave devices send multiple scene switching instructions to the master device. The master device extracts the user's voiceprint information from the voice information corresponding to each scene switching instruction and matches the extracted voiceprint information against each piece of target voiceprint information in turn. If a match with the target voiceprint information is found, the scene switching instruction corresponding to the matching voiceprint information is determined to be the target scene switching instruction, and the user corresponding to it is the target user; the target user's scene switching instruction is collected and used as the user's voiceprint information in the second scene.
- a registered voiceprint library can also be built in advance, with different users registering their own voices beforehand.
- a registering user records a voice on the smart device's settings interface, speaking within the range in which the smart device can capture voice. After the smart device collects the registering user's voice, it uses the voiceprint model to extract the registered voiceprint feature information from that voice and stores the registered voiceprint feature information in the registered voiceprint library.
- the voiceprint model is pre-built, and the parameters of the extracted voiceprint feature information are the same for the voices of different users.
- the voice issued by the user can be any sentence or specified words. The content is set by the user.
- in this way, the voiceprint feature information of the target user can be obtained quickly, and it is also possible to check in advance whether the received voiceprint information of multiple users is stored in the voiceprint library. If it is, the corresponding voiceprint feature information is fetched directly and compared with the target voiceprint feature information, thereby quickly determining the target user and shortening the matching time.
- the voiceprint information corresponding to each scene switching instruction is acquired and matched against each piece of target voiceprint information, and the target user corresponding to the target scene switching instruction is determined, so that the target user's voiceprint information can be collected in time.
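One plausible way to pick the target user among several instructions is to compare each instruction's feature vector against the registered target voiceprints; the cosine-similarity measure and the 0.8 threshold below are assumptions for illustration, not the patent's matching rule:

```python
# Sketch: among multiple scene-switching instructions, find the one whose
# voiceprint features match a registered target voiceprint.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_target(instructions, registered, threshold=0.8):
    """instructions: {user_id: feature_vector} for each received instruction;
    returns (matching user, registered target) or None if nothing matches."""
    for user_id, feat in instructions.items():
        for target_id, target_feat in registered.items():
            if cosine(feat, target_feat) >= threshold:
                return user_id, target_id   # target scene-switching instruction
    return None

registered = {"dad": np.array([1.0, 0.0, 1.0])}
instructions = {
    "stranger": np.array([0.0, 1.0, 0.0]),  # does not match any target
    "dad":      np.array([0.9, 0.1, 1.1]),  # close to dad's registered print
}
print(find_target(instructions, registered))
```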
- the present application also provides a terminal. The terminal includes a memory, a processor, and a scene switching program stored in the memory and running on the processor. When the terminal receives a device startup instruction sent by the user through far-field voice, it turns on multiple far-field devices in the home based on the startup instruction, determines the user's current scene according to the location information corresponding to each turned-on far-field device, and further obtains the user's voiceprint information in the scene and the device information corresponding to each turned-on device (such as device operating parameters, device type, and device capabilities); when the user moves from the first scene to the second scene, if a scene switching instruction sent by the user through far-field voice is received, the user's voiceprint information in the second scene is obtained.
- This embodiment realizes switching of user scenarios by sending far-field voices, reduces the operation complexity of users, realizes intelligent perception and intelligent collaboration of far-field devices, and brings users a more comfortable and convenient home environment.
- the present application also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the scene switching method described above are implemented.
- the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means, the instruction means implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device to cause a series of operational steps to be performed on the computer or other programmable device so as to produce a computer-implemented process, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
- any reference signs placed between parentheses shall not be construed as limiting the claim.
- the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
- the word “a” or “an” preceding an element does not preclude the presence of a plurality of such elements.
- the present application may be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware.
- the use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
Description
Device name | Working state | Operating state and operating parameters |
Air conditioner | On | Cooling (26℃); wind-speed gear (mid); dehumidification on |
Fan | On | Operating mode (natural wind); wind-speed gear (mid); swing mode (left-right) |
Lamp | On | Brightness gear (mid); light mode (soft light) |
Speaker | On | Volume (60%); playback mode (Bluetooth) |
Claims (20)
- A scene switching method, wherein the method comprises: determining whether a user moves from a first scene to a second scene; if so, collecting voiceprint information of the user in the second scene; determining, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquiring operating state information corresponding to each device to be switched; and determining, according to the operating state information corresponding to each device to be switched, operating state information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
- The scene switching method according to claim 1, wherein the step of determining, according to the voiceprint information in the second scene, each device to be switched in the first scene comprises: acquiring device information corresponding to each turned-on device in the first scene and device information corresponding to each target device in the second scene; matching the voiceprint information in the first scene with the voiceprint information in the second scene, and matching the device information corresponding to each turned-on device with the device information corresponding to each target device; and when the voiceprint information in the first scene matches the voiceprint information in the second scene, acquiring, from among the turned-on devices, the turned-on device that matches the target device, and determining the turned-on device that matches the target device as the device to be switched in the first scene.
- The scene switching method according to claim 2, wherein the step of determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene comprises: sending the operating state information corresponding to each device to be switched to each target device in the second scene, and configuring, for each target device, the operating parameters corresponding to the operating state information.
- The scene switching method according to claim 1, wherein the step of determining whether the user moves from the first scene to the second scene comprises: acquiring location information corresponding to each turned-on device in the first scene and location information corresponding to each target device in the second scene; and determining whether the location information corresponding to each turned-on device is the same as the location information corresponding to each target device.
- The scene switching method according to claim 1, wherein the step of, if so, collecting the voiceprint information of the user in the second scene comprises: if the location information corresponding to each turned-on device is different from the location information corresponding to each target device, determining that the user has moved from the first scene to the second scene; detecting whether a scene switching instruction is received; and if the scene switching instruction is received, collecting the voiceprint information of the user in the second scene.
- The scene switching method according to claim 3, wherein, after the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene and configuring, for each target device, the operating parameters corresponding to the operating state information, the method comprises: receiving result information fed back by each target device in the second scene, and determining, based on the result information, whether the operating state information corresponding to each device to be switched in the first scene has been successfully switched to each target device in the second scene; if the operating state information corresponding to each device to be switched has been successfully switched to each target device in the second scene, sending a control instruction to turn off each device to be switched in the first scene; and if the operating state information corresponding to each device to be switched has not been successfully switched to each target device in the second scene, repeating the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene.
- The scene switching method according to claim 1, wherein, before the step of determining whether the user moves from the first scene to the second scene, the method comprises: acquiring a device startup instruction in the first scene, and acquiring the voiceprint information in the first scene according to the device startup instruction; and associating the voiceprint information in the first scene with the first scene.
- The scene switching method according to claim 5, wherein the step of, if the scene switching instruction is received, collecting the voiceprint information of the user in the second scene comprises: if a plurality of scene switching instructions are received, acquiring voiceprint information corresponding to each scene switching instruction; matching the voiceprint information corresponding to each scene switching instruction with each piece of target voiceprint information, and acquiring the voiceprint information that matches the target voiceprint information; if the voiceprint information corresponding to the scene switching instructions includes voiceprint information that matches the target voiceprint information, determining the scene switching instruction corresponding to the matching voiceprint information as a target scene switching instruction, the user corresponding to the target scene switching instruction being a target user; and collecting the scene switching instruction of the target user as the voiceprint information of the user in the second scene.
- The scene switching method according to claim 2, wherein, after the step of, when the voiceprint information in the first scene matches the voiceprint information in the second scene, acquiring, from among the turned-on devices, the turned-on device that matches the target device and determining it as the device to be switched in the first scene, the method further comprises: if the devices to be switched in the first scene include a video playback device, acquiring playback content and playback progress of the video playback device; and sending the playback content and the playback progress to a video playback device in the second scene, so that the video playback device in the second scene displays the playback content according to the playback progress.
- A terminal, comprising a memory, a processor, and a scene switching program stored in the memory and running on the processor, wherein, when executing the scene switching program, the processor is configured to: determine whether a user moves from a first scene to a second scene; if so, collect voiceprint information of the user in the second scene; determine, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquire operating state information corresponding to each device to be switched; and determine, according to the operating state information corresponding to each device to be switched, operating state information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
- The terminal according to claim 10, wherein, when executing the scene switching program, the processor is further configured to: acquire device information corresponding to each turned-on device in the first scene and device information corresponding to each target device in the second scene; match the voiceprint information in the first scene with the voiceprint information in the second scene, and match the device information corresponding to each turned-on device with the device information corresponding to each target device; and when the voiceprint information in the first scene matches the voiceprint information in the second scene, acquire, from among the turned-on devices, the turned-on device that matches the target device, and determine the turned-on device that matches the target device as the device to be switched in the first scene.
- The terminal according to claim 11, wherein, when executing the scene switching program, the processor is further configured to: send the operating state information corresponding to each device to be switched to each target device in the second scene, and configure, for each target device, the operating parameters corresponding to the operating state information.
- The terminal according to claim 10, wherein, when executing the scene switching program, the processor is further configured to: acquire location information corresponding to each turned-on device in the first scene and location information corresponding to each target device in the second scene; and determine whether the location information corresponding to each turned-on device is the same as the location information corresponding to each target device.
- The terminal according to claim 10, wherein, when executing the scene switching program, the processor is further configured to: if the location information corresponding to each turned-on device is different from the location information corresponding to each target device, determine that the user has moved from the first scene to the second scene; detect whether a scene switching instruction is received; and if the scene switching instruction is received, collect the voiceprint information of the user in the second scene.
- The terminal according to claim 12, wherein, when executing the scene switching program, the processor is further configured to: receive result information fed back by each target device in the second scene, and determine, based on the result information, whether the operating state information corresponding to each device to be switched in the first scene has been successfully switched to each target device in the second scene; if the operating state information corresponding to each device to be switched has been successfully switched to each target device in the second scene, send a control instruction to turn off each device to be switched in the first scene; and if the operating state information corresponding to each device to be switched has not been successfully switched to each target device in the second scene, repeat the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene.
- The terminal according to claim 10, wherein, when executing the scene switching program, the processor is further configured to: acquire a device startup instruction in the first scene, and acquire the voiceprint information in the first scene according to the device startup instruction; and associate the voiceprint information in the first scene with the first scene.
- The terminal according to claim 14, wherein, when executing the scene switching program, the processor is further configured to: if a plurality of scene switching instructions are received, acquire voiceprint information corresponding to each scene switching instruction; match the voiceprint information corresponding to each scene switching instruction with each piece of target voiceprint information, and acquire the voiceprint information that matches the target voiceprint information; if the voiceprint information corresponding to the scene switching instructions includes voiceprint information that matches the target voiceprint information, determine the scene switching instruction corresponding to the matching voiceprint information as a target scene switching instruction, the user corresponding to the target scene switching instruction being a target user; and collect the scene switching instruction of the target user as the voiceprint information of the user in the second scene.
- The terminal according to claim 11, wherein, when executing the scene switching program, the processor is further configured to: if the devices to be switched in the first scene include a video playback device, acquire playback content and playback progress of the video playback device; and send the playback content and the playback progress to a video playback device in the second scene, so that the video playback device in the second scene displays the playback content according to the playback progress.
- A computer-readable storage medium, wherein a computer program is stored thereon, and when executed by a processor, the computer program is used to implement: determining whether a user moves from a first scene to a second scene; if so, collecting voiceprint information of the user in the second scene; determining, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquiring operating state information corresponding to each device to be switched; and determining, according to the operating state information corresponding to each device to be switched, operating state information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
- The computer-readable storage medium according to claim 19, wherein, when executed by a processor, the computer program is further used to implement: acquiring device information corresponding to each turned-on device in the first scene and device information corresponding to each target device in the second scene; matching the voiceprint information in the first scene with the voiceprint information in the second scene, and matching the device information corresponding to each turned-on device with the device information corresponding to each target device; and when the voiceprint information in the first scene matches the voiceprint information in the second scene, acquiring, from among the turned-on devices, the turned-on device that matches the target device, and determining the turned-on device that matches the target device as the device to be switched in the first scene.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2023516779A JP2023541636A (ja) | 2020-09-14 | 2021-09-02 | Scene switching method, terminal and storage medium |
GB2305357.2A GB2616133A (en) | 2020-09-14 | 2021-09-02 | Scene switching method, terminal and storage medium |
US18/121,180 US20230291601A1 (en) | 2020-09-14 | 2023-03-14 | Scene switching |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010965434.0A CN112104533B (zh) | 2020-09-14 | 2020-09-14 | Scene switching method, terminal and storage medium |
CN202010965434.0 | 2020-09-14 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/121,180 Continuation US20230291601A1 (en) | 2020-09-14 | 2023-03-14 | Scene switching |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022052864A1 true WO2022052864A1 (zh) | 2022-03-17 |
Family
ID=73759014
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/116320 WO2022052864A1 (zh) | 2020-09-14 | 2021-09-02 | 场景的切换方法、终端和存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20230291601A1 (zh) |
JP (1) | JP2023541636A (zh) |
CN (1) | CN112104533B (zh) |
GB (1) | GB2616133A (zh) |
WO (1) | WO2022052864A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112104533B (zh) * | 2020-09-14 | 2023-02-17 | 深圳Tcl数字技术有限公司 | Scene switching method, terminal and storage medium |
CN112954113B (zh) * | 2021-01-15 | 2022-05-24 | 北京达佳互联信息技术有限公司 | Scene switching method and apparatus, electronic device and storage medium |
CN114900505B (zh) * | 2022-04-18 | 2024-01-30 | 广州市迪士普音响科技有限公司 | Web-based audio scene timed switching method, apparatus and medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104601694A (zh) * | 2015-01-13 | 2015-05-06 | 小米科技有限责任公司 | Operation control method, terminal, relay device, smart device and apparatus |
CN106878762A (zh) * | 2015-12-14 | 2017-06-20 | 北京奇虎科技有限公司 | Method, apparatus, server and system for implementing terminal device switching |
CN107205217A (zh) * | 2017-06-19 | 2017-09-26 | 广州安望信息科技有限公司 | Method and system for uninterrupted content push based on smart-speaker scene networking |
CN110674482A (zh) * | 2019-08-13 | 2020-01-10 | 武汉攀升鼎承科技有限公司 | Multi-scene application computer |
CN111312235A (zh) * | 2018-12-11 | 2020-06-19 | 阿里巴巴集团控股有限公司 | Voice interaction method, apparatus and system |
US20200275225A1 (en) * | 2014-01-17 | 2020-08-27 | Proctor Consulting, LLC | Smart hub |
CN112104533A (zh) * | 2020-09-14 | 2020-12-18 | 深圳Tcl数字技术有限公司 | Scene switching method, terminal and storage medium |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104616675A (zh) * | 2013-11-05 | 2015-05-13 | 华为终端有限公司 | Method for switching playback devices and mobile terminal |
CN104142659B (zh) * | 2013-11-12 | 2017-02-15 | 珠海优特物联科技有限公司 | Smart home scene switching method and system |
CN103984579B (zh) * | 2014-05-30 | 2018-04-13 | 满金标 | Method for sharing the real-time running state of a current application among multiple devices |
CN106850940A (zh) * | 2016-11-29 | 2017-06-13 | 维沃移动通信有限公司 | State switching method and mobile terminal |
CN109597313A (zh) * | 2018-11-30 | 2019-04-09 | 新华三技术有限公司 | Scene switching method and apparatus |
CN110010127A (zh) * | 2019-04-01 | 2019-07-12 | 北京儒博科技有限公司 | Scene switching method, apparatus, device and storage medium |
CN110769280A (zh) * | 2019-10-23 | 2020-02-07 | 北京地平线机器人技术研发有限公司 | Method and apparatus for continuing playback of a file |
- 2020-09-14 CN CN202010965434.0A patent/CN112104533B/zh active Active
- 2021-09-02 JP JP2023516779A patent/JP2023541636A/ja active Pending
- 2021-09-02 WO PCT/CN2021/116320 patent/WO2022052864A1/zh active Application Filing
- 2021-09-02 GB GB2305357.2A patent/GB2616133A/en active Pending
- 2023-03-14 US US18/121,180 patent/US20230291601A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20230291601A1 (en) | 2023-09-14 |
CN112104533B (zh) | 2023-02-17 |
GB202305357D0 (en) | 2023-05-24 |
GB2616133A (en) | 2023-08-30 |
JP2023541636A (ja) | 2023-10-03 |
CN112104533A (zh) | 2020-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022052864A1 (zh) | Scene switching method, terminal and storage medium | |
US10861461B2 (en) | LED design language for visual affordance of voice user interfaces | |
EP3455720B1 (en) | Led design language for visual affordance of voice user interfaces | |
CN108831448A (zh) | Method, apparatus and storage medium for voice control of smart devices | |
DE202017107611U1 (de) | Design for a compact home assistant with combined acoustic waveguide and heat sink | |
JP7393526B2 (ja) | Method, electronic device, server system, and program for providing event clips | |
CN108520746A (zh) | Method, apparatus and storage medium for voice control of smart devices | |
WO2019134473A1 (zh) | Speech recognition system, method and apparatus | |
CN113728685A (zh) | Power management techniques for waking a processor in a media playback system | |
CN111077785A (zh) | Wake-up method, apparatus, terminal and storage medium | |
WO2023071454A1 (zh) | Scene synchronization method and apparatus, electronic device, and readable storage medium | |
CN112433836A (zh) | Method, apparatus and computer device for automatically waking an application | |
CN109979495B (zh) | Method and system for intelligently following audio playback progress based on face recognition | |
CN112671623A (zh) | Projection-based wake-up method and apparatus, projection device, and computer storage medium | |
CN112735403B (zh) | Smart home control system based on a smart speaker | |
WO2023231894A1 (zh) | Wake-up method, apparatus and system based on collaborative error correction, medium, and device | |
CN113053371A (zh) | Voice control system and method, voice kit, bone conduction and voice processing apparatus | |
CN115035894B (zh) | Device response method and apparatus | |
CN114578705B (zh) | Smart home control system based on 5G Internet of Things | |
WO2024021587A1 (zh) | Scene point-control apparatus and air conditioning control system | |
CN117193028A (zh) | Control method and control apparatus for smart devices | |
CN112997453B (zh) | Selecting a destination for a sensor signal according to an activated light setting | |
CN114500493A (zh) | Control method for Internet of Things devices, terminal, and computer-readable storage medium | |
CN111048081B (zh) | Control method, apparatus, electronic device, and control system | |
CN116403575A (zh) | Wake-free voice interaction method, apparatus, storage medium and electronic apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21865918 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2023516779 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 202305357 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20210902 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 03/07/2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21865918 Country of ref document: EP Kind code of ref document: A1 |