WO2022052864A1 - Scene switching method, terminal and storage medium - Google Patents

Scene switching method, terminal and storage medium

Info

Publication number
WO2022052864A1
WO2022052864A1 · PCT/CN2021/116320 · CN2021116320W
Authority
WO
WIPO (PCT)
Prior art keywords
scene
target
information corresponding
switched
scenario
Prior art date
Application number
PCT/CN2021/116320
Other languages
English (en)
French (fr)
Inventor
张毅
赵晓东
刘晓峰
陈涛
罗清刚
Original Assignee
Shenzhen TCL Digital Technology Co., Ltd. (深圳TCL数字技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen TCL Digital Technology Co., Ltd.
Priority to JP2023516779A priority Critical patent/JP2023541636A/ja
Priority to GB2305357.2A priority patent/GB2616133A/en
Publication of WO2022052864A1 publication Critical patent/WO2022052864A1/zh
Priority to US18/121,180 priority patent/US20230291601A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2816 Controlling appliance services of a home automation network by calling their functionalities
    • H04L 12/282 Controlling appliance services of a home automation network by calling their functionalities based on user interaction within the home
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 17/00 Speaker identification or verification
    • G10L 17/22 Interactive procedures; Man-machine interfaces
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/28 Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/2803 Home automation networks
    • H04L 12/2805 Home Audio Video Interoperability [HAVI] networks
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00 Program-control systems
    • G05B 2219/20 Pc systems
    • G05B 2219/26 Pc applications
    • G05B 2219/2642 Domotique, domestic, home control, automation, smart house
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 Speech recognition
    • G10L 15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Definitions

  • the present application relates to the technical field of voice interaction, and in particular, to a scene switching method, terminal and storage medium.
  • far-field devices have been widely used. There may be multiple far-field devices in some users' homes.
  • When a user wakes up a far-field device through far-field voice, the far-field device automatically runs according to the parameters preset by the user or the default parameters. For example: when the user is currently in the living room and queries the wake-up word "small T small T", the far-field device in the living room receives the wake-up word and is turned on; when the user enters the room from the living room and queries the wake-up word "small T small T" again, the far-field device in the room is turned on.
  • the far-field devices that are turned on in the new scenario still operate according to the parameters preset by the user or the default parameters.
  • The embodiments of the present application aim to solve the problem that, when the user moves from the first scene to the second scene, the far-field device that is turned on in the second scene still runs according to the parameters preset by the user or the default parameters.
  • In one aspect, the present application provides a scene switching method, which includes the following steps:
  • each target device is matched with each device to be switched.
  • In another aspect, the present application further provides a terminal. The terminal includes a memory, a processor, and a scene switching program stored in the memory and runnable on the processor; when executing the scene switching program, the processor is configured to:
  • each target device is matched with each device to be switched.
  • In another aspect, the present application also provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, it is used to realize:
  • each target device is matched with each device to be switched.
  • The voiceprint information of the user in the second scene is collected; according to the voiceprint information, each device to be switched in the first scene is determined, and the operating status information corresponding to each device to be switched is obtained.
  • The operating status information corresponding to each device to be switched is sent to each target device in the second scene, and each target device is configured with the operating parameters corresponding to that information. This solves the problem that, when the user moves from the first scene to the second scene, the far-field device that is turned on in the second scene still runs according to the parameters preset by the user in the first scene or the default parameters.
  • Through the cooperative relationship between the devices, the switching of device operating parameters between different scenes is realized.
  • FIG. 1 is a schematic structural diagram of a terminal in the hardware operating environment involved in the solutions of the embodiments of the present application;
  • FIG. 2 is a schematic flowchart of a first embodiment of the scene switching method of the present application;
  • FIG. 3 is a schematic flowchart of a second embodiment of the scene switching method of the present application;
  • FIG. 4 is a schematic flowchart of a third embodiment of the scene switching method of the present application;
  • FIG. 5 is a schematic flowchart of the step of determining whether the user has moved from the first scene to the second scene in the scene switching method of the present application;
  • FIG. 6 is a schematic flowchart of the step of, if so, collecting the voiceprint information of the user in the second scene in the scene switching method of the present application;
  • FIG. 7 is a schematic flowchart of the step of determining each device to be switched in the first scene according to the voiceprint information in the scene switching method of the present application;
  • FIG. 8 is a schematic flowchart of the steps after, when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtaining the enabled device that matches the target device from each enabled device and determining it as the device to be switched in the first scene, in the scene switching method of the present application;
  • FIG. 9 is a schematic flowchart of the step of determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene, in the scene switching method of the present application;
  • FIG. 10 is a schematic flowchart of the step of sending the operating status information corresponding to each device to be switched to each target device in the second scene and configuring each target device with the corresponding operating parameters, in the scene switching method of the present application.
  • The main solutions of the embodiments of the present application are: determining whether the user moves from the first scene to the second scene; if so, collecting the voiceprint information of the user in the second scene; determining each device to be switched in the first scene according to the voiceprint information, and obtaining the operating status information corresponding to each device to be switched; and determining, according to the operating status information corresponding to each device to be switched, the operating status information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
  • the far-field devices that are turned on in the new scene still operate according to the parameters preset by the user or the default parameters.
  • The voiceprint information of the user in the second scene is collected; each device to be switched in the first scene is determined according to the voiceprint information, and the operating status information corresponding to each device to be switched is obtained; the operating status information is then sent to each target device in the second scene, and each target device is configured with the operating parameters corresponding to that information, so that the switching of device operating parameters between different scenes is realized.
  • FIG. 1 is a schematic structural diagram of a terminal of a hardware operating environment involved in the solution of an embodiment of the present application.
  • the terminal may include: a processor 1001 , such as a CPU, a network interface 1004 , a user interface 1003 , a memory 1005 , and a communication bus 1002 .
  • the communication bus 1002 is used to realize the connection and communication between these components.
  • The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may include a standard wired interface and a wireless interface (eg, a WI-FI interface).
  • the memory 1005 may be high-speed RAM memory, or may be non-volatile memory, such as disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001 .
  • the terminal may further include a camera, an RF (Radio Frequency, radio frequency) circuit, a sensor, a remote control, an audio circuit, a WiFi module, a detector, and the like.
  • the terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a temperature sensor, etc., which will not be repeated here.
  • The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal device; the terminal may include more or fewer components than shown, combine some components, or have a different arrangement of components.
  • the memory 1005 as a computer-readable storage medium may include an operating system, a network communication module, a user interface module, and a scene switching program.
  • the network interface 1004 is mainly used to connect to the background server and perform data communication with the background server;
  • the user interface 1003 is mainly used to connect to the client (client) and perform data communication with the client;
  • The processor 1001 can be used to invoke the scene switching program stored in the memory 1005 and perform the following operations:
  • each target device is matched with each device to be switched.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for switching a scenario of the present application.
  • the method for switching a scenario includes the following steps:
  • Step S10 determining whether the user moves from the first scene to the second scene
  • the multiple far-field devices constitute a wireless voice interaction system.
  • The user selects one far-field device in the wireless voice interaction system as the master device and uses the other far-field devices as slave devices; the master device and the slave devices are connected wirelessly, or through a wireless hotspot set up in the environment.
  • The scenes include but are not limited to the living room, the room (bedroom), the kitchen, etc.; the first scene refers to the scene where the user was before entering the current scene; the second scene refers to the current scene where the user is located.
  • the step of determining whether the user moves from the first scene to the second scene includes:
  • Step S11 obtaining the position information corresponding to each of the activated devices in the first scene and the position information corresponding to each of the target devices in the second scene;
  • Step S12 judging whether the location information corresponding to each enabled device is the same as the location information corresponding to each target device.
  • the far-field device in each scenario has a built-in positioning module.
  • The master control device obtains the network identification parameters of a far-field device through the network connected to that device, and obtains the satellite positioning information of the device based on its satellite positioning module; further, the position information of the far-field device is obtained from the network identification parameters combined with the satellite positioning information, and the scene where the user is currently located can be determined according to the position information.
  • In the same way, the position information corresponding to each activated device in the first scene and the position information corresponding to each target device in the second scene are obtained; it is then judged whether the position information corresponding to each activated device is the same as that corresponding to each target device, thereby determining whether the scene where the user is located has changed, that is, whether the user has moved from the first scene to the second scene.
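The location comparison of steps S11-S12 can be sketched as follows. This is an illustrative sketch only, not part of the patent: the function name `has_user_moved` and the string-valued location records are assumptions.

```python
def has_user_moved(enabled_locations, target_locations):
    """Compare the locations of the activated devices (first scene) with
    the locations of the target devices (second scene); any difference
    means the user has moved to a new scene (steps S11-S12)."""
    return set(enabled_locations) != set(target_locations)

# Location info would come from network identification parameters
# combined with satellite positioning, as described above.
print(has_user_moved(["living_room"], ["bedroom"]))   # locations differ
print(has_user_moved(["kitchen"], ["kitchen"]))       # same scene
```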
  • the target device is a wake-up device in the second scenario, and the wake-up device refers to a device that has not acquired running parameters.
  • Step S20 if yes, collect the voiceprint information of the user in the second scene
  • the step of collecting the voiceprint information of the user in the second scene includes:
  • Step S21 if the position information corresponding to each activated device is different from the position information corresponding to each target device, it is determined that the user is transferred from the first scene to the second scene;
  • Step S22 detecting whether a scene switching instruction is received
  • Step S23 if the scene switching instruction is received, collect the voiceprint information of the user in the second scene.
  • the slave device detects in real time whether a scene switching instruction sent by the user through far-field voice is received, and if a scene switching instruction sent by the user is received, the user's voiceprint information in the second scene is further obtained.
  • the built-in voice detection unit of the slave device detects the voice signal within the working range of the slave device in real time.
  • The voice information of the user corresponding to the scene switching instruction is obtained, and voiceprint recognition is performed on the voice information to extract the voiceprint feature information; the user's voiceprint information is obtained based on the voiceprint feature information.
  • The far-field device in the second scene is turned on based on the far-field control instruction sent by the user. The slave device uses the built-in voice detection unit to detect, in real time, the voice signal within the working range of the device.
  • the "small T small T" wake-up word sent by the user through far-field voice is received, multiple far-field devices in this scenario are turned on.
  • The wake-up word is preset based on the user's needs and is not limited here. Once the slave device detects the wake-up word, it performs voice wake-up processing to wake up the algorithm unit in the standby state.
  • After the algorithm unit is awakened, that is, after it switches from the standby state to the active state, the voice detection unit acquires the voice signal. Correspondingly, the algorithm unit can process the acquired voice signal according to a predetermined method, including echo cancellation, reverberation cancellation, sound source localization, etc., and finally obtains a clear voice signal, which is transmitted to the control system of the smart device.
  • The control system of the smart device uploads the acquired voice signal to the server/cloud, so that the server/cloud can perform voice recognition on it and generate a corresponding turn-on instruction according to the recognition result, which is returned to the control system of the smart device; the control system then turns on the far-field device in the scene according to the turn-on instruction.
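The wake-up flow described above (clean the signal in the algorithm unit, recognize it in the cloud, act on the returned instruction) might be sketched as follows. All function names are assumptions, and the signal-processing and cloud-recognition steps are stubbed out for illustration.

```python
def algorithm_unit(signal: str) -> str:
    # Stub: real processing performs echo cancellation, reverberation
    # cancellation, and sound source localization to obtain a clear signal.
    return signal.strip()

def cloud_recognize(signal: str) -> str:
    # Stub: real recognition runs on the server/cloud and returns an
    # instruction derived from the recognition result.
    return "turn_on" if "small T small T" in signal else "unknown"

def on_voice_signal(raw_signal: str) -> str:
    """Wake-up flow: clean the signal, recognize it in the cloud, and let
    the control system act on the returned turn-on instruction."""
    cleaned = algorithm_unit(raw_signal)
    instruction = cloud_recognize(cleaned)
    return "device_on" if instruction == "turn_on" else "ignored"
```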
  • the voice detection unit may be a low-power microphone unit with a wake-up function.
  • Low power consumption means that the power consumption of the microphone unit is very low. By using such a microphone, energy consumption can be saved.
  • the microphone unit may be a microphone array including at least two microphones, and the use of a plurality of microphones can improve the collection sensitivity of the microphone unit for voice signals.
  • For example, a microphone can be set at the left, center, and right positions below the far-field device, so that the device can better capture the user's voice signal.
  • the algorithm unit in the standby state can be woken up. For example, when any microphone detects a wake-up word, a wake-up signal (interrupt signal) can be sent to the algorithm unit, thereby activating the algorithm unit to perform operation functions such as echo cancellation, reverberation cancellation, and sound source localization.
  • Step S30 Determine each device to be switched in the first scene according to the voiceprint information, and obtain operating status information corresponding to each device to be switched;
  • The operating state information includes an on state, an off state, a recovery state, etc., and differs between devices. For example, the operating state information of an air conditioner includes a cooling operation state, a heating operation state, a dehumidification operation state, a defrosting operation state, etc.; the operating state information of a fan includes the natural wind mode, the wind speed gear, whether a timer is set, etc.
  • The master control device obtains each enabled device in the first scene according to the voiceprint information, and further obtains the operating status information corresponding to each enabled device in the first scene from the storage module. For example, the master control device receives an acquisition instruction for the operating state information, where the acquisition instruction is triggered by the master control device after it receives the voiceprint information of the user in the second scene; based on the acquisition instruction, the state information corresponding to each turned-on device in the first scene is acquired from the storage module. This state information is collected by the slave devices from each turned-on far-field device through a data collection module and sent to the master control device for storage.
  • The state information of each enabled device in the first scene is matched with the device information corresponding to each target device in the second scene; each device to be switched in the first scene is determined based on this matching, and the operating status information corresponding to each device to be switched is obtained. Therefore, referring to FIG. 7, the step of determining each device to be switched in the first scene according to the voiceprint information includes:
  • Step S31 acquiring device information corresponding to each enabled device in the first scenario and device information corresponding to each of the target devices in the second scenario;
  • Step S32 matching the voiceprint information in the first scene with the voiceprint information in the second scene, and matching the device information corresponding to each activated device with the device information corresponding to each target device;
  • Step S33 when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtain the activated device that matches the target device in each activated device, and determining the enabled device that matches the target device as the to-be-switched device in the first scenario.
  • When acquiring the device information corresponding to each enabled device in the first scene and the device information corresponding to each target device in the second scene, the master control device first matches the voiceprint information in the first scene with the voiceprint information in the second scene. If the two do not match, it means that the user who turned on the far-field devices in the first scene and the user who woke up the far-field devices in the second scene are not the same person, so voice prompt information is sent to remind the user that the current voiceprint information does not match. If the voiceprint information in the first scene matches that in the second scene, the master control device further matches the device information corresponding to each enabled device with the device information corresponding to each target device, where the device information includes the device type, device capability, device usage time, etc. If the device information corresponding to each enabled device does not match that corresponding to each target device, a voice prompt message is sent to prompt the user to confirm the current scene that needs to be switched.
  • The enabled device that matches the target device is obtained from each enabled device and is determined as the device to be switched in the first scene. During matching, the devices turned on in the first scene are matched against the devices awakened in the second scene; for example, if the awakened devices include an air conditioner, a lamp, and a speaker, then the air conditioners, lamps, and speakers in the first scene and the second scene are matched respectively.
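The type-based pairing described above could look like the following sketch; the dict-based device records and the function name `match_devices` are assumptions introduced for illustration, not taken from the patent.

```python
def match_devices(enabled, awakened):
    """Pair each awakened target device in the second scene with an
    enabled device of the same type in the first scene (steps S31-S33)."""
    by_type = {d["type"]: d for d in enabled}
    return {t["type"]: by_type[t["type"]]
            for t in awakened if t["type"] in by_type}

enabled = [{"type": "air_conditioner"}, {"type": "lamp"}, {"type": "speaker"}]
awakened = [{"type": "air_conditioner"}, {"type": "lamp"}]
to_switch = match_devices(enabled, awakened)
# The air conditioner and lamp are paired; the speaker has no awakened
# counterpart in the second scene, so it is not switched.
```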
  • the storage module of the main control device stores the operating status information corresponding to each enabled device in the first scenario, as shown in Table 1:
  • Table 1 only lists the storage of some device information; the device information also includes other operating states and operating parameters, which are not listed one by one here.
  • the corresponding operating status and operating parameter information of each device to be switched in the first scenario can be obtained.
  • the air conditioner is in the cooling mode, the operating cooling temperature is 26°C, the wind speed gear is mid-range, and the dehumidification mode is turned on;
  • the current brightness level of the lamp is mid-range, the light mode is soft light mode;
  • the volume of the audio is adjusted to 60%, and the playback mode is Bluetooth playback mode.
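The stored operating-state records just listed (and sketched in Table 1) might be modelled as follows; the field names are assumptions chosen to mirror the air conditioner, lamp, and speaker states above.

```python
# Hypothetical operating-state records for the devices to be switched,
# as the master device's storage module might hold them.
operating_state = {
    "air_conditioner": {"mode": "cooling", "temperature_c": 26,
                        "fan_speed": "mid", "dehumidify": True},
    "lamp": {"brightness": "mid", "light_mode": "soft"},
    "speaker": {"volume_pct": 60, "playback": "bluetooth"},
}

# The master device would look up a record before distributing it to the
# matching target device in the second scene.
print(operating_state["air_conditioner"]["temperature_c"])  # 26
```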
  • the method further includes:
  • Step S320 if the device to be switched in the first scene has a video playback device, acquire the playback content and playback progress of the video playback device;
  • Step S321 sending the playback content and the playback progress to the video playback device in the second scene, so that the video playback device in the second scene displays the playback content according to the playback progress.
  • When the main control device detects that there is a video playback device, such as a TV, among the devices to be switched in the first scene, it obtains the content currently played on the TV and the playback progress information; when performing the scene switching operation, the previously played content and the playback progress information are sent to the TV in the second scene, so that the TV in the second scene displays the content according to the playback progress.
  • For example, the TV in the first scene broadcasts the award ceremony after the Chinese women's volleyball team won the championship on the CCTV-5 sports channel, and the playback progress is the second minute of the ceremony. The TV in the second scene then also broadcasts the award ceremony of the Chinese women's volleyball team on the CCTV-5 sports channel, starting from the second minute of the ceremony.
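Steps S320-S321 amount to copying the playback content and progress from the first-scene player to the second-scene player; a minimal sketch follows, with the dict-based device records as assumptions.

```python
def transfer_playback(source_tv, target_tv):
    """Send the playback content and progress of the first-scene video
    device to the second-scene video device (steps S320-S321), so that
    playback resumes from the same point."""
    target_tv["content"] = source_tv["content"]
    target_tv["progress_s"] = source_tv["progress_s"]
    return target_tv

living_room_tv = {"content": "CCTV-5 award ceremony", "progress_s": 120}
bedroom_tv = transfer_playback(living_room_tv, {})
# The second-scene TV now shows the same content from the same progress.
```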
  • Step S40 according to the operating state information corresponding to each device to be switched, determine the operating state information corresponding to each target device in the second scenario; wherein each target device matches each device to be switched .
  • The operating state information corresponding to each target device in the second scene is determined according to the operating state information corresponding to each device to be switched in the first scene; here, each target device matching each device to be switched means that the device types, device capabilities, and device usage times of the target device and the device to be switched all match.
  • the main control device determines the operating parameters corresponding to each target device according to the operating parameters in the operating status information corresponding to each device to be switched in the first scenario.
  • The step of determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene includes:
  • Step S41 Send the operating state information corresponding to each device to be switched to each target device in the second scenario, and configure the operating parameters corresponding to the operating state information for each target device.
  • The main control device obtains the corresponding operating parameters according to the operating state information corresponding to each device to be switched in the first scene, where the operating parameters include operating condition parameters and operating state parameters. The operating condition parameters include some or all of the set temperature, the indoor temperature, and the outdoor temperature, and the operating state parameters include some or all of the exhaust temperature, operating current, exhaust pressure, evaporating temperature, and condensing temperature.
  • The master control device distributes the acquired operating parameters corresponding to each device to be switched to each corresponding awakened device in the second scene, and each awakened target device in the second scene sets its current operation according to the received operating parameters.
  • For example, if the operating cooling temperature of the air conditioner to be switched is currently 26°C, the wind speed is mid-range, and the sweeping mode is up-and-down sweeping; the lamp to be switched is in warm light mode with a mid-range brightness gear; and the fan to be switched has a wind speed in the third gear and a left-and-right swing mode, then the air conditioner in the second scene operates at a cooling temperature of 26°C, a mid-range wind speed, and up-and-down sweeping; the lamp is in warm light mode with a mid-range brightness gear; and the fan speed is in the third gear with a left-and-right swing mode.
  • After the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene and configuring each target device with the corresponding operating parameters, the method includes:
  • Step S42 receiving the result information fed back by each target device in the second scene, and determining whether the operating status information corresponding to each device to be switched in the first scene is successfully switched to each target device in the second scene;
  • Step S43 if the operating status information corresponding to each device to be switched is successfully switched to each target device in the second scene, sending a control instruction to close each device to be switched in the first scene;
  • Step S44 if the operating status information corresponding to each device to be switched is not successfully switched to each target device in the second scene, repeating the step of sending the operating status information corresponding to each device to be switched to each target device in the second scene.
  • each awakened device in the second scene will send the result information of the operation parameter switching to the main control device, and the result information includes the operation status information, operation parameter information, etc. of each awakened device;
  • the main control device determines whether the respective operating status information corresponding to each device to be switched in the first scenario is successfully switched to each target device in the second scenario, for example: judging that each device to be switched in the first scenario corresponds to Whether the operating status information corresponding to each target device in the second scenario is the same as the operating status information corresponding to each target device in the second scenario, if they are the same, it means that the operating status information corresponding to each device to be switched is successfully switched to each target device in the second scenario.
  • the main control device sends a control command to close each device to be switched in the first scene; if it is not the same, it means that the operating status information corresponding to each device to be switched has not been successfully switched to each target device in the second scene, then repeat the operation of each device to be switched.
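The verification-and-retry logic of steps S42–S44 can be sketched as follows. This is a minimal illustration, not the patented implementation: the dictionary schema, the `send_state`/`close_device` callables, and the retry limit are all hypothetical stand-ins for the master–slave messaging the patent describes.

```python
def switch_scene(to_switch, targets, send_state, close_device, max_retries=3):
    """Push each to-switch device's operating state to its matching target,
    verify the fed-back result info, then either close the source devices
    (S43) or repeat the sending step (S44)."""
    for attempt in range(max_retries):
        # Step S41: distribute operating state info to the target devices;
        # send_state returns the target's fed-back state (result information)
        feedback = {name: send_state(targets[name], state)
                    for name, state in to_switch.items()}
        # Step S42: compare the fed-back state with the intended state
        if all(feedback[name] == state for name, state in to_switch.items()):
            # Step S43: switch succeeded -> turn off the first-scene devices
            for name in to_switch:
                close_device(name)
            return True
        # Step S44: otherwise fall through and repeat the sending step
    return False
```

In a real deployment `send_state` would be a network call to the awakened device and `close_device` a control instruction back into the first scene; here they are injected so the control flow can be exercised in isolation.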
• It should be noted that, before the scene switching is performed, the conditions for constructing the scenes need to be determined. Constructing the scenes involves the following conditions: multiple far-field devices are set up; a master control device and slave devices are defined in the home, where the master control device includes a storage module and a matching module — the storage module is used to store the voiceprint information, device information, and scene information sent by the slave devices, and the matching module is used to match the voiceprint information and device information of the first scene and the second scene — and the slave devices include smart air conditioners, smart TVs, smart fans, smart audio equipment, and the like;
• a UDP connection is established between the master control device and the slave devices and is monitored with heartbeat packets, where UDP (User Datagram Protocol) is a connectionless transport-layer protocol in the OSI reference model, providing a simple, transaction-oriented message delivery service without reliability guarantees — which is precisely why heartbeat packets are used to detect the state of the connection;
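A master-side heartbeat probe over the master–slave UDP link might look like the following sketch. The port number, payload strings, and timeout are illustrative assumptions, not values from the patent.

```python
import socket

def heartbeat_ok(slave_addr=("192.168.1.20", 9000), timeout=1.0):
    """Send one heartbeat datagram to a slave device and wait for its echo.
    UDP itself gives no delivery guarantee, so the master periodically
    probes the link and treats a silent slave as offline."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(b"HEARTBEAT", slave_addr)
        data, _ = sock.recvfrom(64)
        return data == b"HEARTBEAT_ACK"
    except socket.timeout:
        return False  # slave considered offline; master can reconnect/re-pair
    finally:
        sock.close()
```

The slave side would simply bind a UDP socket and echo `HEARTBEAT_ACK` for every `HEARTBEAT` it receives.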
• In summary, when it is determined that the user has moved from the first scene to the second scene, the user's voiceprint information in the second scene is collected; according to the voiceprint information, each device to be switched in the first scene is determined and the operating state information corresponding to each device to be switched is acquired; the operating state information corresponding to each device to be switched is then sent to each target device in the second scene, and the operating parameters corresponding to that operating state information are configured for each target device. By establishing a cooperative relationship between far-field devices, the operating parameters of far-field devices can be switched between different user scenes. In addition, far-field devices are turned on automatically by far-field voice, which reduces the user's operational complexity, realizes intelligent perception and intelligent collaboration of far-field devices, and brings users a more comfortable and convenient home environment.
• Further, referring to FIG. 3, a second embodiment of the scene switching method of the present application is proposed.
• The second embodiment of the scene switching method differs from the first embodiment in that, before the step of determining whether the user moves from the first scene to the second scene, the method includes:
• Step S13: acquiring a device start-up instruction in the first scene, and acquiring the voiceprint information in the first scene according to the device start-up instruction;
• Step S14: associating the voiceprint information in the first scene with the first scene.
• When a slave device receives the wake-up word "小T小T" ("Little T, Little T") sent by the user through far-field voice, it wakes up the multiple far-field devices in the first scene according to the wake-up word and configures operating parameters for each awakened far-field device so as to turn it on; the master control device then acquires the device capability, device status, and other information corresponding to each turned-on device. At the same time, the acquired user speech is preprocessed to remove non-speech signals and silent segments, the preprocessed speech is divided into frames, and the Mel-frequency cepstral coefficients (MFCC) of each frame are extracted and saved. This specifically includes the following steps:
• Pre-emphasis: differencing the speech signal;
• Framing: dividing the speech data into frames;
• Hamming window: applying a window to each frame's signal to reduce the influence of the Gibbs effect;
• Fast Fourier transform: transforming the time-domain signal into the signal's power spectrum;
• Triangular band-pass filters: each triangular filter covers a range approximating a critical bandwidth of the human ear, so as to simulate the ear's masking effect;
• Discrete cosine transform: removing the correlation between the signal's dimensions and mapping the signal to a low-dimensional space.
• Speech dynamic characteristic parameters are then obtained from the extracted MFCC parameters and used as the user's voiceprint feature information, thereby obtaining the user's voiceprint information in the first scene.
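The MFCC pipeline above (pre-emphasis, framing, Hamming window, FFT power spectrum, triangular mel filterbank, DCT) can be sketched with NumPy as follows. This is a generic textbook implementation, not the patent's voiceprint model; the frame length, hop, filter count, and coefficient count are illustrative defaults.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512, n_mels=26, n_ceps=13):
    # Pre-emphasis: difference the speech signal
    emph = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Framing + Hamming window (reduces the Gibbs effect)
    n_frames = 1 + (len(emph) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = emph[idx] * np.hamming(frame_len)
    # FFT: time-domain signal -> power spectrum
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank, approximating the ear's critical bands
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fbank[m - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fbank[m - 1, k] = (r - k) / max(r - c, 1)
    feat = np.log(power @ fbank.T + 1e-10)
    # DCT-II: decorrelate the dimensions, map to a low-dimensional space
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1)) / (2 * n_mels))
    return feat @ dct.T  # shape: (n_frames, n_ceps)
```

The patent additionally derives "speech dynamic characteristic parameters" from these coefficients (typically delta features); that step is omitted here.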
• The slave device associates the acquired voiceprint information in the first scene with the first scene. For example, if the user's current scene is the living room, the voiceprint information currently acquired in the living room is bound to the living-room scene, which makes it possible, when the user enters another scene, to know from the voiceprint information of the first scene that the user's previous scene was the living room.
• Optionally, the voiceprint information can also be associated with the user's personal information. For example, for each family member — grandpa, grandma, father, mother, children, and so on — the member's user information and voiceprint feature information are collected separately, and the user information is associated with the corresponding voiceprint feature information (for example, the father's user information with the father's voiceprint feature information). The master device then obtains the voiceprint information sent by the slave device together with the information bound to the scene (such as living-room scene + voiceprint subject) and the device status information in that scene, and stores the obtained information in the corresponding storage unit.
• Further, referring to FIG. 4, a third embodiment of the scene switching method of the present application is proposed.
• The third embodiment differs from the first and second embodiments in that the step of, if the scene switching instruction is received, collecting the user's voiceprint information in the second scene includes:
• Step S230: if multiple scene switching instructions are received, acquiring the voiceprint information corresponding to each scene switching instruction;
• Step S231: matching the voiceprint information corresponding to each scene switching instruction against each piece of target voiceprint information, and acquiring the voiceprint information that matches the target voiceprint information;
• Step S232: if, among the voiceprint information corresponding to the scene switching instructions, there is voiceprint information matching the target voiceprint information, determining the scene switching instruction corresponding to that matching voiceprint information to be the target scene switching instruction, and the user corresponding to the target scene switching instruction to be the target user;
• Step S233: collecting the scene switching instruction of the target user and using it as the user's voiceprint information in the second scene.
• When multiple users send a "scene switch" instruction through far-field voice at the same time, the slave device sends the multiple scene switching instructions to the master control device, which extracts the users' voiceprint information from the speech corresponding to each scene switching instruction and matches each extracted piece of voiceprint information in turn against each piece of target voiceprint information. If there is voiceprint information matching the target voiceprint information, the scene switching instruction corresponding to that voiceprint information is determined to be the target scene switching instruction, and the user corresponding to it is the target user; the target user's scene switching instruction is then collected and used as the user's voiceprint information in the second scene.
• Optionally, a registered voiceprint library can also be built in advance, and different users can register their own voices beforehand. For example, a registering user performs voice registration on the setting interface of the smart device by speaking within the range in which the device can collect speech; after collecting the registering user's voice, the smart device uses a voiceprint model to extract registered voiceprint feature information from it and stores that information in the registered voiceprint library.
• The voiceprint model is pre-built, and the parameters of the extracted voiceprint feature information are the same for different users' voices; the speech uttered by the user may be any sentence or specified words, with the content set by the user.
• By building the voiceprint library, the target user's voiceprint feature information can be obtained quickly; at the same time, it is possible to query whether the received voiceprint information of multiple users has been stored in the library in advance. If so, the corresponding voiceprint feature information is obtained directly and compared against the target voiceprint feature information, thereby quickly determining the target user and shortening the time of the matching operation.
• In this embodiment, when scene switching instructions sent by multiple users are received, the voiceprint information corresponding to each scene switching instruction is acquired and matched against each piece of target voiceprint information to determine the target user corresponding to the target scene switching instruction, so that the target user's voiceprint information can be collected in time.
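The library lookup described above — comparing a probe voiceprint against pre-registered feature vectors to pick out the target user — can be sketched as a cosine-similarity search. The feature-vector representation and the 0.8 threshold are assumptions for illustration; the patent does not specify the comparison metric.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def find_target_user(library, probe, threshold=0.8):
    """Compare a probe voiceprint feature vector against the registered
    voiceprint library and return the best-matching registered user,
    or None if no similarity score clears the threshold."""
    best_user, best_score = None, threshold
    for user, registered in library.items():
        score = cosine(registered, probe)
        if score > best_score:
            best_user, best_score = user, score
    return best_user
```

Returning `None` when nothing clears the threshold corresponds to the case where the received voiceprint is not in the library, and a full voiceprint extraction/enrollment would be needed instead of the fast lookup.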
• In addition, the present application also provides a terminal, including a memory, a processor, and a scene switching program stored in the memory and running on the processor. When the terminal receives a device start-up instruction sent by a user through far-field voice, it turns on multiple far-field devices in the home based on that instruction, determines the user's current scene from the location information corresponding to each turned-on far-field device, and further acquires the user's voiceprint information in that scene and the device information corresponding to each turned-on device (such as operating parameters, device type, and device capabilities). When the user moves from the first scene to the second scene, if a scene switching instruction sent by the user through far-field voice is received, the terminal acquires the user's voiceprint information in the second scene and the device information corresponding to each awakened device; it matches the voiceprint information of the first scene with that of the second scene, and matches the device information corresponding to each turned-on device with that corresponding to each target device, so as to determine each device to be switched in the first scene; it then sends the operating state information corresponding to each device to be switched to each target device in the second scene and configures for each target device the operating parameters corresponding to that operating state information.
• This embodiment realizes switching of user scenes via far-field voice, reduces the user's operational complexity, realizes intelligent perception and intelligent collaboration of far-field devices, and brings users a more comfortable and convenient home environment.
  • the present application also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the switching method of the above scenario are implemented.
  • the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
• These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
• These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • any reference signs placed between parentheses shall not be construed as limiting the claim.
  • the word “comprising” does not exclude the presence of elements or steps not listed in a claim.
  • the word “a” or “an” preceding an element does not preclude the presence of a plurality of such elements.
  • the present application may be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware.
• the use of the words first, second, third, and so on does not denote any order; these words may be interpreted as names.

Abstract

The present application discloses a scene switching method, a terminal, and a storage medium. The method includes: determining whether a user moves from a first scene to a second scene; if so, collecting the user's voiceprint information in the second scene; determining, according to the voiceprint information, each device to be switched in the first scene and acquiring the operating state corresponding to each such device; and determining the operating state corresponding to each target device in the second scene. Device operating parameters can thereby be switched between different scenes.

Description

Scene switching method, terminal, and storage medium
This application claims priority to Chinese Patent Application No. 202010965434.0, entitled "Scene switching method, terminal, and storage medium" and filed with the China Patent Office on September 14, 2020, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of voice interaction, and in particular to a scene switching method, a terminal, and a storage medium.
Background
With the development of far-field speech recognition technology, far-field devices have come into wide use, and some users may have multiple far-field devices at home. When a user wakes a far-field device by far-field voice, the device automatically runs according to parameters preset by the user or default parameters. For example, when the user's current scene is the living room and the user queries the wake-up word "小T小T" ("Little T, Little T"), the far-field devices in the living room turn on upon receiving the wake-up word; when the user moves from the living room into a bedroom and queries the wake-up word again, the far-field devices in the bedroom turn on. However, because no cooperative relationship is established among far-field devices in the home's different scenes, when the user enters a new scene, the far-field devices turned on in the new scene still run according to the user's preset parameters or default parameters.
Technical Problem
Because no cooperative relationship is established among far-field devices in the home's different scenes, when the user enters a new scene, the far-field devices turned on in the new scene still run according to the user's preset parameters or default parameters.
Technical Solution
Embodiments of the present application provide a scene switching method, a terminal, and a storage medium, aiming to solve the problem that, when a user moves from a first scene to a second scene, the far-field devices turned on in the second scene still run according to the user's preset parameters or default parameters.
To achieve the above object, in one aspect the present application provides a scene switching method, including the following steps:
determining whether a user moves from a first scene to a second scene;
if so, collecting voiceprint information of the user in the second scene;
determining, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquiring the operating state information corresponding to each device to be switched;
determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene;
wherein each target device matches each device to be switched.
In addition, to achieve the above object, in another aspect the present application further provides a terminal, including a memory, a processor, and a scene switching program stored in the memory and running on the processor, where the processor, when executing the scene switching program, is configured to:
determine whether a user moves from a first scene to a second scene;
if so, collect voiceprint information of the user in the second scene;
determine, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquire the operating state information corresponding to each device to be switched;
determine, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene;
wherein each target device matches each device to be switched.
In addition, to achieve the above object, in another aspect the present application further provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, is configured to implement:
determining whether a user moves from a first scene to a second scene;
if so, collecting voiceprint information of the user in the second scene;
determining, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquiring the operating state information corresponding to each device to be switched;
determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene;
wherein each target device matches each device to be switched.
Beneficial Effects
In this embodiment, when it is determined that the user moves from the first scene to the second scene, the user's voiceprint information in the second scene is collected; each device to be switched in the first scene is determined according to the voiceprint information, and the operating state information corresponding to each device to be switched is acquired; the operating state information corresponding to each device to be switched is sent to each target device in the second scene, and the operating parameters corresponding to that operating state information are configured for each target device. Thus, when the user moves from the first scene to the second scene, the far-field devices turned on in the second scene run according to the parameters the user preset in the first scene or the default parameters, and through the cooperative relationship between far-field devices, switching of device operating parameters between different scenes is realized.
Brief Description of the Drawings
FIG. 1 is a schematic structural diagram of a terminal in the hardware operating environment involved in embodiments of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of the scene switching method of the present application;
FIG. 3 is a schematic flowchart of a second embodiment of the scene switching method of the present application;
FIG. 4 is a schematic flowchart of a third embodiment of the scene switching method of the present application;
FIG. 5 is a schematic flowchart of determining whether the user moves from the first scene to the second scene in the scene switching method of the present application;
FIG. 6 is a schematic flowchart of, if so, collecting the user's voiceprint information in the second scene in the scene switching method of the present application;
FIG. 7 is a schematic flowchart of determining each device to be switched in the first scene according to the voiceprint information in the scene switching method of the present application;
FIG. 8 is a schematic flowchart following the step of, when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtaining from the turned-on devices the turned-on device matching the target device and determining it as the device to be switched in the first scene, in the scene switching method of the present application;
FIG. 9 is a schematic flowchart of determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene, in the scene switching method of the present application;
FIG. 10 is a schematic flowchart following the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene and configuring for each target device the operating parameters corresponding to that operating state information, in the scene switching method of the present application.
The realization of the objects, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the Present Invention
It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
The main solution of the embodiments of the present application is: determining whether a user moves from a first scene to a second scene; if so, collecting the user's voiceprint information in the second scene; determining, according to the voiceprint information in the second scene, each device to be switched in the first scene and acquiring the operating state information corresponding to each device to be switched; and determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
Because no cooperative relationship is established among far-field devices in the home's different scenes, when the user enters a new scene, the far-field devices turned on in the new scene still run according to the user's preset parameters or default parameters. In contrast, when the present application determines that the user moves from the first scene to the second scene, it collects the user's voiceprint information in the second scene; determines each device to be switched in the first scene according to the voiceprint information and acquires the corresponding operating state information; and sends the operating state information corresponding to each device to be switched to each target device in the second scene, configuring for each target device the operating parameters corresponding to that information. Through the cooperative relationship between far-field devices, switching of far-field device operating parameters between different user scenes is realized.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of a terminal in the hardware operating environment involved in embodiments of the present application.
As shown in FIG. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a WI-FI interface). The memory 1005 may be high-speed RAM or stable non-volatile memory such as disk storage, and may optionally be a storage device independent of the aforementioned processor 1001.
Optionally, the terminal may further include a camera, an RF (Radio Frequency) circuit, sensors, a remote control, an audio circuit, a WiFi module, a detector, and so on. Of course, the terminal may also be equipped with other sensors such as a gyroscope, a barometer, a hygrometer, and a temperature sensor, which are not repeated here.
Those skilled in the art can understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal device, which may include more or fewer components than shown, combine certain components, or use a different component arrangement.
As shown in FIG. 1, the memory 1005, as a computer-readable storage medium, may include an operating system, a network communication module, a user interface module, and a scene switching program.
In the terminal shown in FIG. 1, the network interface 1004 is mainly used to connect to a back-end server and communicate data with it; the user interface 1003 is mainly used to connect to a client (user side) and communicate data with it; and the processor 1001 may be used to call the scene switching program stored in the memory 1005 and perform the following operations:
determining whether a user moves from a first scene to a second scene;
if so, collecting voiceprint information of the user in the second scene;
determining, according to the voiceprint information in the second scene, each device to be switched in the first scene, and acquiring the operating state information corresponding to each device to be switched;
determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene;
wherein each target device matches each device to be switched.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a first embodiment of the scene switching method of the present application. The scene switching method includes the following steps:
Step S10: determining whether a user moves from a first scene to a second scene;
With the improvement of people's quality of life, more and more users like to place multiple far-field devices at home, such as speakers, air conditioners, and televisions; these far-field devices form a wireless voice interaction system. Usually, the user selects one far-field device in the wireless voice interaction system as the master control device and uses the other far-field devices as slave devices; the master control device and the slave devices are connected wirelessly, or through a wireless hotspot set up in the environment.
In this embodiment, the scenes include but are not limited to the living room, bedroom, kitchen, and so on; the first scene refers to the scene the user was in before entering the current scene, and the second scene refers to the scene the user is currently in.
Before the scene switching operation is performed, it must first be determined whether the user's current scene has changed. For example, multiple cameras may be set up in different scenes to capture pictures/videos of the scene the user is in, from which the device identifiers of that scene are obtained — for instance, the identifier of the living-room air conditioner is 01 and that of the bedroom air conditioner is 02; the device identifiers can be set according to the user's needs and are not limited here. Whether the user's scene has changed is determined based on the device identifiers, which are stored in advance in association with the corresponding scene information. Alternatively, whether the user moves from the first scene to the second scene is determined based on the positioning information of the different scenes. In one embodiment, referring to FIG. 5, the step of determining whether the user moves from the first scene to the second scene includes:
Step S11: acquiring the location information corresponding to each turned-on device in the first scene and the location information corresponding to each target device in the second scene;
Step S12: judging whether the location information corresponding to each turned-on device is the same as the location information corresponding to each target device.
Every far-field device in each scene has a built-in positioning module. When a far-field device is turned on, the master control device obtains the device's network identification parameters through the network the device is connected to, and obtains the device's satellite positioning information through its satellite positioning module; the device's location information is then obtained from the network identification parameters combined with the satellite positioning information, and the scene the user is currently in can be determined from that location information. The same method is used to obtain the location information corresponding to each turned-on device in the first scene and each target device in the second scene, and it is then judged whether the two sets of location information are the same, thereby judging whether the user's scene has changed, i.e., determining whether the user moves from the first scene to the second scene.
It should be noted that the target devices are the awakened devices in the second scene, where an awakened device refers to a device that has not yet acquired operating parameters.
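The location comparison of steps S11/S12 reduces to checking whether the two groups of devices report the same location labels. A minimal sketch, assuming locations have already been resolved to scene labels (the label representation is an assumption, not part of the patent):

```python
def user_moved_scene(opened_device_locations, target_device_locations):
    """Step S11/S12: the user is considered to have moved from the first
    scene to the second when the locations of the devices turned on in the
    first scene differ from those of the awakened target devices."""
    return set(opened_device_locations) != set(target_device_locations)
```

In practice the labels would come from the network identification parameters combined with the satellite positioning information described above.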
Step S20: if so, collecting the user's voiceprint information in the second scene;
When it is determined that the user moves from the first scene to the second scene — for example, from the living room to the bedroom — whether the user's voice information is received is detected in real time; if it is, the user's voiceprint information in the bedroom is collected based on the voice information. In one embodiment, referring to FIG. 6, the step of, if so, collecting the user's voiceprint information in the second scene includes:
Step S21: if the location information corresponding to each turned-on device is not the same as the location information corresponding to each target device, determining that the user has moved from the first scene to the second scene;
Step S22: detecting whether a scene switching instruction is received;
Step S23: if the scene switching instruction is received, collecting the user's voiceprint information in the second scene.
If the location information corresponding to each turned-on device in the first scene differs from that corresponding to each target device in the second scene, the user has moved from the first scene to the second scene; if they are the same, the user's current scene has not changed. After the user enters the second scene, the slave device detects in real time whether a scene switching instruction sent by the user through far-field voice is received; if so, the user's voiceprint information in the second scene is further acquired. Specifically, after the far-field devices of the second scene are turned on, the voice detection unit built into the slave device detects voice signals within the slave device's working range in real time; when the "scene switch" instruction sent by the user through far-field voice is received, it is determined that the user needs to switch from the first scene to the second scene, the user's voice information corresponding to the scene switching instruction is acquired, voiceprint recognition is performed on it to extract the voiceprint feature information, and the user's voiceprint information is obtained based on that feature information.
Further, the far-field devices in the second scene are turned on based on a far-field control instruction sent by the user. Specifically, when the user moves from the first scene to the second scene, the slave device uses its built-in voice detection unit to detect voice signals within its working range in real time; when the wake-up word "小T小T" sent by the user through far-field voice is received, the multiple far-field devices in that scene are turned on. The wake-up word is preset by the user as required and is not limited here. Once the slave device detects the wake-up word, it performs voice wake-up processing and wakes the algorithm unit from standby; after the algorithm unit is awakened, i.e., switched from standby to the active state, the voice detection unit transmits the acquired voice signal to it. Accordingly, the algorithm unit can process the acquired voice signal in a predetermined manner, including echo cancellation, dereverberation, and sound source localization, finally obtaining a clear voice signal that is transmitted to the smart device's control system. The control system uploads the acquired voice signal to the server/cloud for speech recognition, a corresponding turn-on instruction is generated according to the recognition result and returned to the control system, and the control system turns on the far-field devices in that scene according to the instruction.
Optionally, the voice detection unit may be a low-power microphone unit with a wake-up function; low power means the microphone unit consumes very little power, so using such a microphone saves energy. In addition, the microphone unit may be a microphone array containing at least two microphones; using multiple microphones improves the unit's sensitivity in collecting voice signals. For example, one microphone may be placed at each of the left, middle, and right positions below the far-field device, so that the user's voice signal can be collected well whether the user is directly in front of, to the left of, or to the right of the device. When any microphone in the array detects the wake-up word, it can wake the algorithm unit from standby — for example, by sending a wake-up (interrupt) signal to the algorithm unit, activating it to perform echo cancellation, dereverberation, sound source localization, and other operations.
Step S30: determining, according to the voiceprint information, each device to be switched in the first scene, and acquiring the operating state information corresponding to each device to be switched;
In this embodiment, the operating state information includes an on state, an off state, a resumed state, and so on, and differs between devices: for example, an air conditioner's operating state information includes cooling, heating, dehumidifying, and defrosting states, while a fan's includes a natural-wind mode, the wind-speed gear, whether a timer is set, and similar states.
The master control device obtains each turned-on device in the first scene according to the voiceprint information, and further obtains from its storage module the operating state information corresponding to each turned-on device in the first scene. For example, when the master control device receives an instruction to acquire the operating state information of the turned-on devices — the acquisition instruction being triggered after the master control device receives the user's voiceprint information in the second scene — it obtains from the storage module the state information corresponding to each turned-on device in the first scene; this state information was collected from each turned-on far-field device by the slave device through a data collection module and sent to the master control device for storage. The state information of each turned-on device in the first scene is then matched against the device information corresponding to each target device in the second scene, each device to be switched in the first scene is determined based on the matching operation, and the operating state information corresponding to each device to be switched is acquired. Accordingly, referring to FIG. 7, the step of determining each device to be switched in the first scene according to the voiceprint information in the second scene includes:
Step S31: acquiring the device information corresponding to each turned-on device in the first scene and the device information corresponding to each target device in the second scene;
Step S32: matching the voiceprint information in the first scene with the voiceprint information in the second scene, and matching the device information corresponding to each turned-on device with the device information corresponding to each target device;
Step S33: when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtaining, from the turned-on devices, the turned-on device that matches the target device, and determining the turned-on device matching the target device as the device to be switched in the first scene.
When the master control device has acquired the device information corresponding to each turned-on device in the first scene and each target device in the second scene, it first matches the voiceprint information in the first scene against that in the second scene. If they do not match, the user who turned on the far-field devices in the first scene and the user who awakened the far-field devices in the second scene are not the same person, so a voice prompt is sent to inform the user that the current voiceprint information does not match. If they do match, the master control device further matches the device information corresponding to each turned-on device against that corresponding to each target device, where the device information includes device type, device capabilities, device usage time, and so on. If the device information does not match, a voice prompt is sent asking the user to confirm the scene to be switched to. If the device information matches, fully or partially, the turned-on devices matching the target devices are obtained from the turned-on devices and determined as the devices to be switched in the first scene. The matching is performed between the turned-on devices of the first scene and the awakened devices of the second scene. For example, if the turned-on devices in the first scene are an air conditioner, a television, a lamp, and a speaker, while the awakened devices in the second scene are an air conditioner, a lamp, and a speaker, then the air conditioners, lamps, and speakers of the two scenes are matched against one another. If these devices all satisfy the matching conditions, the air conditioner, lamp, and speaker are determined to be the devices to be switched in the first scene, and the operating state information corresponding to each is further obtained from the master control device's storage module, which stores the operating state information corresponding to each turned-on device in the first scene, as shown in Table 1:
Table 1

Device           | Working state | Operating state and parameters
Air conditioner  | On            | Cooling (26°C); wind speed (mid); dehumidify on
Fan              | On            | Mode (natural wind); wind speed (mid); swing (left-right)
Lamp             | On            | Brightness (mid); light mode (soft light)
Speaker          | On            | Volume (60%); playback mode (Bluetooth)

Table 1 lists only part of the stored device information; the device information also includes other operating states and parameters, which are not enumerated one by one here.
From Table 1, the operating state and parameter information corresponding to each device to be switched in the first scene can be obtained: the air conditioner is in cooling mode at an operating cooling temperature of 26°C with a mid-range wind speed and dehumidification on; the lamp's current brightness level is mid-range and its light mode is soft light; the speaker's volume is set to 60% and its playback mode is Bluetooth.
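Distributing Table 1 style state records to the matching awakened devices can be sketched as a simple keyed copy. The dictionary schema below is a hypothetical encoding of the table for illustration, not a format defined by the patent:

```python
# Operating states of the first scene's devices, mirroring Table 1
first_scene_states = {
    "air_conditioner": {"mode": "cooling", "temp_c": 26,
                        "wind_speed": "mid", "dehumidify": True},
    "fan":     {"mode": "natural_wind", "wind_speed": "mid", "swing": "left_right"},
    "lamp":    {"brightness": "mid", "light_mode": "soft"},
    "speaker": {"volume_pct": 60, "playback": "bluetooth"},
}

def configure_targets(states, awakened):
    """Copy each to-switch device's operating parameters onto the matching
    awakened device in the second scene; awakened devices with no matching
    counterpart in the first scene are left unconfigured."""
    configured = {}
    for name in awakened:
        if name in states:
            configured[name] = dict(states[name])  # per-device parameter copy
    return configured

# Second scene awakens only an air conditioner, a lamp, and a speaker
second_scene = configure_targets(first_scene_states,
                                 ["air_conditioner", "lamp", "speaker"])
```

This mirrors the example in the text: the first scene's television has no awakened counterpart, so only the three matching devices receive parameters.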
Further, when the devices to be switched in the first scene include a video playback device, the playing content and playback progress of that device need to be recorded. Therefore, referring to FIG. 8, after the step of, when the voiceprint information in the first scene matches the voiceprint information in the second scene, obtaining from the turned-on devices the turned-on device matching the target device and determining it as the device to be switched in the first scene, the method further includes:
Step S320: if the devices to be switched in the first scene include a video playback device, acquiring the playing content and playback progress of the video playback device;
Step S321: sending the playing content and playback progress to the video playback device in the second scene, so that the video playback device in the second scene displays the playing content according to the playback progress.
If the master control device detects that the devices to be switched in the first scene include a video playback device such as a television, it acquires the television's current playing content and progress information; when the scene switching operation is performed, the playing content and progress information are sent to the television in the second scene, so that it displays the playing content according to the playback progress. For example, if the television in the first scene is playing the award ceremony of the Chinese women's volleyball team on the CCTV-5 sports channel, with playback at the second minute of the ceremony, then after the scene switch, the television in the second scene also plays that CCTV-5 content and starts playing from the second minute of the ceremony.
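The playback hand-off of steps S320/S321 amounts to carrying two fields — content and progress — from the source device to the target. A minimal sketch, with a hypothetical device-state dictionary standing in for the real control messages:

```python
def hand_over_playback(source_tv, target_tv):
    """Steps S320/S321: carry the playing content and playback progress from
    the first scene's video device to the second scene's, so that playback
    resumes at the same position after the scene switch."""
    target_tv["content"] = source_tv["content"]
    target_tv["progress_s"] = source_tv["progress_s"]
    target_tv["playing"] = True
    return target_tv
```

In the CCTV-5 example above, `progress_s` would be the second minute of the ceremony (120 seconds), and the second scene's television resumes from exactly that point.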
Step S40: determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene; wherein each target device matches each device to be switched.
In this embodiment, the operating state information corresponding to each target device in the second scene is determined according to the operating state information corresponding to each device to be switched in the first scene; that each target device matches each device to be switched means that the device type, device capabilities, device usage time, and so on all match between the target device and the device to be switched. Specifically, the master control device determines the operating parameters corresponding to each target device according to the operating parameters in the operating state information corresponding to each device to be switched in the first scene. In one embodiment, referring to FIG. 9, the step of determining, according to the operating state information corresponding to each device to be switched, the operating state information corresponding to each target device in the second scene includes:
Step S41: sending the operating state information corresponding to each device to be switched to each target device in the second scene, and configuring, for each target device, the operating parameters corresponding to that operating state information.
The master control device obtains the corresponding operating parameters according to the operating state information of each device to be switched in the first scene; the operating parameters include operating condition parameters and operating state parameters. For example, an air conditioner's operating condition parameters include some or all of the operating mode, start-up temperature, indoor temperature, and outdoor temperature, and its operating state parameters include some or all of the exhaust temperature, working current, exhaust pressure, evaporation temperature, and condensation temperature.
The master control device distributes the acquired operating parameters of each device to be switched to the corresponding awakened devices in the second scene, and each awakened device in the second scene, upon receiving the operating parameters, configures its current operation accordingly. For example: it is currently acquired that the air conditioner to be switched in the first scene runs at a cooling temperature of 26°C with a mid-range wind speed and up-and-down air sweeping; the lamp to be switched is in warm-light mode at a mid-range brightness level; and the fan to be switched runs at the third wind-speed gear with a left-and-right swing. The acquired operating parameters of the air conditioner, lamp, and fan to be switched are sent to the respective target devices in the second scene, so that the second scene's air conditioner runs at a cooling temperature of 26°C with a mid-range wind speed and up-and-down air sweeping, the lamp is in warm-light mode at a mid-range brightness level, and the fan runs at the third gear with a left-and-right swing.
Further, referring to FIG. 10, after the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene and configuring for each target device the operating parameters corresponding to that operating state information, the method includes:
Step S42: receiving result information fed back by each target device in the second scene, and judging, based on the result information, whether the operating state information corresponding to each device to be switched in the first scene has been successfully switched to each target device in the second scene;
Step S43: if the operating state information corresponding to each device to be switched has been successfully switched to each target device in the second scene, sending a control instruction to turn off each device to be switched in the first scene;
Step S44: if the operating state information corresponding to each device to be switched has not been successfully switched to each target device in the second scene, repeating the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene.
After the scene switching operation, each awakened device in the second scene sends result information about the operating-parameter switch to the master control device, the result information including each awakened device's operating state information, operating parameter information, and so on. Based on the received result information, the master control device judges whether the operating state information corresponding to each device to be switched in the first scene has been successfully switched to each target device in the second scene — for example, by judging whether the operating state information corresponding to each device to be switched in the first scene is the same as that corresponding to each target device in the second scene. If so, the switch succeeded and the master control device sends a control instruction to turn off each device to be switched in the first scene; if not, the switch failed and the step of sending the operating state information corresponding to each device to be switched to each target device in the second scene is repeated.
It should be noted that, before the scene switching is performed, the conditions for constructing the scenes need to be determined. Constructing the scenes includes the following conditions:
1. Multiple far-field devices are set up.
2. The far-field devices are identified using lightweight protocols such as mDNS or UPnP, and a master control device and slave devices are defined in the home. The master control device includes a storage module and a matching module: the storage module is used to store the voiceprint information, device information, scene information, and so on sent by the slave devices, and the matching module is used to match the voiceprint information and device information of the first and second scenes. The slave devices include smart air conditioners, smart TVs, smart fans, smart speakers, and other devices.
3. A UDP connection is established between the master control device and the slave devices, and the connection is monitored with heartbeat packets; UDP (User Datagram Protocol) is a connectionless transport-layer protocol in the OSI reference model, providing a simple, transaction-oriented message delivery service without reliability guarantees.
4. Each device is registered by location and the scenes are divided, according to the device types of the awakened far-field devices or as supplemented by the user.
5. The far-field devices' scene-switch wake-up phrases and scene-switch phrases are set, such as "小T小T" and "scene switch".
In this embodiment, when it is determined that the user moves from the first scene to the second scene, the user's voiceprint information in the second scene is collected; each device to be switched in the first scene is determined according to the voiceprint information, and the operating state information corresponding to each device to be switched is acquired; the operating state information corresponding to each device to be switched is sent to each target device in the second scene, and the operating parameters corresponding to that operating state information are configured for each target device. By establishing a cooperative relationship between far-field devices, switching of far-field device operating parameters between different user scenes is realized; in addition, far-field devices are turned on automatically by far-field voice, which reduces the user's operational complexity, realizes intelligent perception and intelligent collaboration of far-field devices, and brings users a more comfortable and convenient home environment.
Further, referring to FIG. 3, a second embodiment of the scene switching method of the present application is proposed.
The second embodiment of the scene switching method differs from the first embodiment in that, before the step of determining whether the user moves from the first scene to the second scene, the method includes:
Step S13: acquiring a device start-up instruction in the first scene, and acquiring the voiceprint information in the first scene according to the device start-up instruction;
Step S14: associating the voiceprint information in the first scene with the first scene.
When the slave device receives the wake-up word "小T小T" sent by the user through far-field voice, it wakes up the multiple far-field devices in the first scene according to the wake-up word and configures operating parameters for each awakened far-field device so as to turn it on; the master control device then acquires the device capability, device status, and other information corresponding to each turned-on device. At the same time, the acquired user speech is preprocessed to remove non-speech signals and silent speech segments, yielding preprocessed speech, which is then divided into frames, and the Mel-frequency cepstral coefficients (MFCC) of each frame of the speech signal are extracted and saved. This specifically includes the following steps: pre-emphasis, i.e., differencing the speech signal; framing, i.e., dividing the speech data into frames; Hamming windowing, i.e., applying a window to each frame's signal to reduce the influence of the Gibbs effect; fast Fourier transform, i.e., transforming the time-domain signal into the signal's power spectrum; triangular band-pass filtering, where each triangular filter covers a range approximating a critical bandwidth of the human ear so as to simulate the ear's masking effect; and discrete cosine transform, i.e., removing the correlation between the signal's dimensions and mapping the signal to a low-dimensional space. Speech dynamic characteristic parameters are then obtained from the extracted MFCC parameters and used as the user's voiceprint feature information, thereby obtaining the user's voiceprint information in the first scene.
The slave device associates the acquired voiceprint information in the first scene with the first scene. For example, if the user's current scene is the living room, the voiceprint information currently acquired in the living room is bound to the living-room scene, which makes it possible, when the user enters another scene, to know from the voiceprint information of the first scene that the user's previous scene was the living room. Optionally, the voiceprint information may also be associated with the user's personal information: for example, for each family member — grandpa, grandma, father, mother, children, and so on — the member's user information and voiceprint feature information are collected separately and associated with each other (for example, the father's user information with the father's voiceprint feature information). The master device then obtains the voiceprint information sent by the slave device together with the information bound to the scene (such as living-room scene + voiceprint subject) and the device status information in that scene, and stores the obtained information in the corresponding storage unit.
In this embodiment, by associating the voiceprint information in the first scene with the first scene, the scene information corresponding to the voiceprint information is obtained at the same time as the voiceprint information in the first scene.
Further, referring to FIG. 4, a third embodiment of the scene switching method of the present application is proposed.
The third embodiment of the scene switching method differs from the first and second embodiments in that the step of, if the scene switching instruction is received, collecting the user's voiceprint information in the second scene includes:
Step S230: if multiple scene switching instructions are received, acquiring the voiceprint information corresponding to each scene switching instruction;
Step S231: matching the voiceprint information corresponding to each scene switching instruction against each piece of target voiceprint information, and acquiring the voiceprint information that matches the target voiceprint information;
Step S232: if, among the voiceprint information corresponding to the scene switching instructions, there is voiceprint information matching the target voiceprint information, determining the scene switching instruction corresponding to the matching voiceprint information to be the target scene switching instruction, and the user corresponding to the target scene switching instruction to be the target user;
Step S233: collecting the scene switching instruction of the target user and using it as the user's voiceprint information in the second scene.
When multiple users send the "scene switch" instruction through far-field voice at the same time, the slave device sends the multiple scene switching instructions to the master control device, which extracts the users' voiceprint information from the speech corresponding to each scene instruction and matches each extracted piece of voiceprint information in turn against each piece of target voiceprint information. If, among the voiceprint information corresponding to the scene switching instructions, there is voiceprint information matching the target voiceprint information, the scene switching instruction corresponding to it is determined to be the target scene switching instruction and the corresponding user to be the target user; the target user's scene switching instruction is collected and used as the user's voiceprint information in the second scene.
Optionally, a registered voiceprint library may also be built in advance, with different users registering their own voices beforehand. For example, a registering user performs voice registration on the smart device's setting interface by speaking within the range in which the device can collect speech; after collecting the registering user's voice, the smart device uses a voiceprint model to extract registered voiceprint feature information from it and stores that information in the registered voiceprint library. The voiceprint model is pre-built, and the parameters of the extracted voiceprint feature information are the same for different users' voices; moreover, the speech uttered by the user may be any sentence or specified words, with the specific content set by the user. By building the voiceprint library, the target user's voiceprint feature information can be obtained quickly; at the same time, it can be queried whether the received voiceprint information of multiple users has been stored in the library in advance — if so, the corresponding voiceprint feature information is obtained directly and compared against the target voiceprint feature information, quickly determining the target user and shortening the time of the matching operation.
In this embodiment, when scene switching instructions sent by multiple users are received, the voiceprint information corresponding to each scene switching instruction is acquired and matched against each piece of target voiceprint information to determine the target user corresponding to the target scene switching instruction, so that the target user's voiceprint information can be collected in time.
In addition, the present application also provides a terminal, including a memory, a processor, and a scene switching program stored in the memory and running on the processor. When the terminal receives a device start-up instruction sent by a user through far-field voice, it turns on multiple far-field devices in the home based on the instruction, determines the user's current scene from the location information corresponding to each turned-on far-field device, and further acquires the user's voiceprint information in that scene and the device information corresponding to each turned-on device (such as the device's operating parameters, type, and capabilities). When the user moves from the first scene to the second scene, if a scene switching instruction sent by the user through far-field voice is received, the terminal acquires the user's voiceprint information in the second scene and the device information corresponding to each awakened device; it matches the voiceprint information of the first scene with that of the second scene, and matches the device information corresponding to each turned-on device with that corresponding to each target device, so as to determine each device to be switched in the first scene; it then sends the operating state information corresponding to each device to be switched to each target device in the second scene and configures for each target device the operating parameters corresponding to that operating state information. This embodiment realizes scene switching via far-field voice, reduces the user's operational complexity, realizes intelligent perception and intelligent collaboration of far-field devices, and brings users a more comfortable and convenient home environment.
In addition, the present application also provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the scene switching method described above.
Those skilled in the art will appreciate that the embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing apparatus produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
It should be noted that, in the claims, any reference signs placed between parentheses shall not be construed as limiting the claims. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present application may be implemented by means of hardware comprising several different components and by means of a suitably programmed computer. In a claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not denote any order; these words may be interpreted as names.
Although optional embodiments of the present application have been described, those skilled in the art, once they learn of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the optional embodiments and all changes and modifications falling within the scope of the present application.
Obviously, those skilled in the art can make various changes and variations to the present application without departing from its spirit and scope. If these modifications and variations fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to include them.

Claims (20)

  1. 一种场景的切换方法,其中,所述方法包括:
    确定用户是否从第一场景移动至第二场景;
    若是,则采集所述用户在所述第二场景中的声纹信息;
    根据所述第二场景中的声纹信息确定所述第一场景中各个待切换设备,并获取所述各个待切换设备分别对应的运行状态信息;
    根据所述各个待切换设备分别对应的运行状态信息,确定所述第二场景中各个目标设备分别对应的运行状态信息;
    其中,所述各个目标设备与所述各个待切换设备相匹配。
  2. 根据权利要求1所述的场景的切换方法,其中,所述根据所述第二场景中的声纹信息确定所述第一场景中各个待切换设备的步骤包括:
    获取所述第一场景中各个被开启设备分别对应的设备信息以及所述第二场景中各个所述目标设备分别对应的设备信息;
    将所述第一场景中的声纹信息与所述第二场景中的声纹信息进行匹配以及将所述各个被开启设备分别对应的设备信息和所述各个目标设备分别对应的设备信息进行匹配;
    在所述第一场景中的声纹信息与所述第二场景中的声纹信息匹配时,在所述各个被开启设备中获取与所述目标设备匹配的所述被开启设备,并将与所述目标设备匹配的所述被开启设备确定为所述第一场景中的所述待切换设备。
  3. 根据权利要求2所述的场景的切换方法,其中,所述根据所述各个待切换设备分别对应的运行状态信息,确定所述第二场景中各个目标设备分别对应的运行状态信息的步骤包括:
    将所述各个待切换设备分别对应的运行状态信息发送至所述第二场景中所述各个目标设备,为所述各个目标设备配置所述运行状态信息对应的运行参数。
  4. 根据权利要求1所述的场景的切换方法,其中,所述确定用户是否从第一场景移动至第二场景的步骤包括:
    获取所述第一场景中所述各个被开启设备分别对应的位置信息以及所述第二场景中所述各个目标设备分别对应的位置信息;
    判断所述各个被开启设备分别对应的位置信息与所述各个目标设备分别对应的位置信息是否相同。
  5. 根据权利要求1所述的场景的切换方法,其中,所述若是,则采集所述用户在所述第二场景中的声纹信息的步骤包括:
    若所述各个被开启设备分别对应的位置信息与所述各个目标设备分别对应的位置信息不相同,则确定用户从所述第一场景转移至所述第二场景;
    检测是否接收到场景切换指令;
    若接收到所述场景切换指令,则采集所述用户在所述第二场景中的声纹信息。
  6. 根据权利要求3所述的场景的切换方法,其中,所述将所述各个待切换设备分别对应的运行状态信息发送至所述第二场景中所述各个目标设备,为所述各个目标设备配置所述运行状态信息对应的运行参数的步骤之后,包括:
    接收所述第二场景中所述各个目标设备反馈的结果信息,基于所述结果信息判断所述第一场景中所述各个待切换设备分别对应的运行状态信息是否成功切换至所述第二场景中的所述各个目标设备;
    若所述各个待切换设备分别对应的运行状态信息成功切换至所述第二场景中的所述各个目标设备,则发送控制指令关闭所述第一场景中所述各个待切换设备;
    若所述各个待切换设备分别对应的运行状态信息未成功切换至所述第二场景中的所述各个目标设备,则重复将所述各个待切换设备分别对应的运行状态信息发送至所述第二场景中所述各个目标设备的步骤。
  7. 根据权利要求1所述的场景的切换方法,其中,所述确定用户是否从第一场景移动至第二场景的步骤之前,包括:
    获取所述第一场景中的设备开启指令,根据所述设备开启指令获取所述第一场景中的所述声纹信息;
    将所述第一场景中的所述声纹信息与所述第一场景相关联。
  8. 根据权利要求5所述的场景的切换方法,其中,所述若接收到所述场景切换指令,则采集所述用户在所述第二场景中的声纹信息的步骤包括:
    若接收到多个所述场景切换指令,则获取各个所述场景切换指令分别对应的声纹信息;
    将各个所述场景切换指令分别对应的声纹信息与各个目标声纹信息分别进行匹配,获取与所述目标声纹信息匹配的所述声纹信息;
    若各个所述场景切换指令分别对应的声纹信息中存在与所述各个目标声纹信息匹配的所述声纹信息,则确定与所述目标信息匹配的所述声纹信息所对应的场景切换指令为目标场景切换指令,所述目标场景切换指令对应的用户为目标用户;
    采集所述目标用户的所述场景切换指令,并作为所述用户在所述第二场景中的声纹信息。
  9. 根据权利要求2所述的场景的切换方法,其中,所述在所述第一场景中的声纹信息与所述第二场景中的声纹信息匹配时,在所述各个被开启设备中获取与所述目标设备匹配的所述被开启设备,并将与所述目标设备匹配的所述被开启设备确定为所述第一场景中的所述待切换设备的步骤之后,还包括:
    若所述第一场景中所述待切换设备存在视频播放设备,则获取所述视频播放设备的播放内容和播放进度;
    将所述播放内容和所述播放进度发送至所述第二场景中的视频播放设备,以使所述第二场景中的视频播放设备根据所述播放进度对所述播放内容进行显示。
  10. 一种终端,其中,包括存储器、处理器及存储在存储器上并在处理器上运行的场景的切换的程序,所述处理器执行所述场景的切换程序时用于:
    确定用户是否从第一场景移动至第二场景;
    若是,则采集所述用户在所述第二场景中的声纹信息;
    根据所述第二场景中的声纹信息确定所述第一场景中各个待切换设备,并获取所述各个待切换设备分别对应的运行状态信息;
    根据所述各个待切换设备分别对应的运行状态信息,确定所述第二场景中各个目标设备分别对应的运行状态信息;
    其中,所述各个目标设备与所述各个待切换设备相匹配。
  11. 根据权利要求10所述的终端,其中,所述处理器执行所述场景的切换程序时还用于:
    获取所述第一场景中各个被开启设备分别对应的设备信息以及所述第二场景中各个所述目标设备分别对应的设备信息;
    将所述第一场景中的声纹信息与所述第二场景中的声纹信息进行匹配以及将所述各个被开启设备分别对应的设备信息和所述各个目标设备分别对应的设备信息进行匹配;
    在所述第一场景中的声纹信息与所述第二场景中的声纹信息匹配时,在所述各个被开启设备中获取与所述目标设备匹配的所述被开启设备,并将与所述目标设备匹配的所述被开启设备确定为所述第一场景中的所述待切换设备。
  12. 根据权利要求11所述的终端,其中,所述处理器执行所述场景的切换程序时还用于:
    将所述各个待切换设备分别对应的运行状态信息发送至所述第二场景中所述各个目标设备,为所述各个目标设备配置所述运行状态信息对应的运行参数。
  13. 根据权利要求10所述的终端,其中,所述处理器执行所述场景的切换程序时还用于:
    获取所述第一场景中所述各个被开启设备分别对应的位置信息以及所述第二场景中所述各个目标设备分别对应的位置信息;
    判断所述各个被开启设备分别对应的位置信息与所述各个目标设备分别对应的位置信息是否相同。
  14. 根据权利要求10所述的终端,其中,所述处理器执行所述场景的切换程序时还用于:
    若所述各个被开启设备分别对应的位置信息与所述各个目标设备分别对应的位置信息不相同,则确定用户从所述第一场景转移至所述第二场景;
    检测是否接收到场景切换指令;
    若接收到所述场景切换指令,则采集所述用户在所述第二场景中的声纹信息。
  15. 根据权利要求12所述的终端,其中,所述处理器执行所述场景的切换程序时还用于:
    接收所述第二场景中所述各个目标设备反馈的结果信息,基于所述结果信息判断所述第一场景中所述各个待切换设备分别对应的运行状态信息是否成功切换至所述第二场景中的所述各个目标设备;
    若所述各个待切换设备分别对应的运行状态信息成功切换至所述第二场景中的所述各个目标设备,则发送控制指令关闭所述第一场景中所述各个待切换设备;
    若所述各个待切换设备分别对应的运行状态信息未成功切换至所述第二场景中的所述各个目标设备,则重复将所述各个待切换设备分别对应的运行状态信息发送至所述第二场景中所述各个目标设备的步骤。
  16. The terminal according to claim 10, wherein, when executing the scene switching program, the processor is further configured to:
    acquire a device turn-on instruction in the first scene, and acquire the voiceprint information in the first scene according to the device turn-on instruction;
    associate the voiceprint information in the first scene with the first scene.
  17. The terminal according to claim 14, wherein, when executing the scene switching program, the processor is further configured to:
    if a plurality of scene switching instructions are received, acquire the voiceprint information corresponding to each of the scene switching instructions;
    match the voiceprint information corresponding to each of the scene switching instructions against each piece of target voiceprint information, and acquire the voiceprint information that matches the target voiceprint information;
    if, among the voiceprint information corresponding to the scene switching instructions, there is voiceprint information that matches the target voiceprint information, determine that the scene switching instruction corresponding to the matching voiceprint information is a target scene switching instruction, and that the user corresponding to the target scene switching instruction is a target user;
    collect the scene switching instruction of the target user, and use its voiceprint information as the voiceprint information of the user in the second scene.
  18. The terminal according to claim 11, wherein, when executing the scene switching program, the processor is further configured to:
    if the to-be-switched devices in the first scene include a video playback device, acquire the playback content and the playback progress of the video playback device;
    send the playback content and the playback progress to a video playback device in the second scene, so that the video playback device in the second scene displays the playback content according to the playback progress.
  19. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements:
    determining whether a user moves from a first scene to a second scene;
    if so, collecting voiceprint information of the user in the second scene;
    determining, according to the voiceprint information in the second scene, each to-be-switched device in the first scene, and acquiring the running state information corresponding to each to-be-switched device;
    determining, according to the running state information corresponding to each to-be-switched device, the running state information corresponding to each target device in the second scene;
    wherein each target device matches a corresponding to-be-switched device.
  20. The computer-readable storage medium according to claim 19, wherein the computer program, when executed by the processor, further implements:
    acquiring the device information corresponding to each turned-on device in the first scene and the device information corresponding to each target device in the second scene;
    matching the voiceprint information in the first scene against the voiceprint information in the second scene, and matching the device information corresponding to each turned-on device against the device information corresponding to each target device;
    when the voiceprint information in the first scene matches the voiceprint information in the second scene, acquiring, from the turned-on devices, the turned-on device that matches the target device, and determining the turned-on device that matches the target device as the to-be-switched device in the first scene.
PCT/CN2021/116320 2020-09-14 2021-09-02 Scene switching method, terminal and storage medium WO2022052864A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2023516779A JP2023541636A (ja) 2020-09-14 2021-09-02 Scene switching method, terminal and storage medium
GB2305357.2A GB2616133A (en) 2020-09-14 2021-09-02 Scene switching method, terminal and storage medium
US18/121,180 US20230291601A1 (en) 2020-09-14 2023-03-14 Scene switching

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010965434.0A CN112104533B (zh) 2020-09-14 2020-09-14 Scene switching method, terminal and storage medium
CN202010965434.0 2020-09-14

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/121,180 Continuation US20230291601A1 (en) 2020-09-14 2023-03-14 Scene switching

Publications (1)

Publication Number Publication Date
WO2022052864A1 true WO2022052864A1 (zh) 2022-03-17

Family

ID=73759014

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/116320 WO2022052864A1 (zh) 2020-09-14 2021-09-02 Scene switching method, terminal and storage medium

Country Status (5)

Country Link
US (1) US20230291601A1 (zh)
JP (1) JP2023541636A (zh)
CN (1) CN112104533B (zh)
GB (1) GB2616133A (zh)
WO (1) WO2022052864A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112104533B (zh) * 2020-09-14 2023-02-17 Shenzhen TCL Digital Technology Co., Ltd. Scene switching method, terminal and storage medium
CN112954113B (zh) * 2021-01-15 2022-05-24 Beijing Dajia Internet Information Technology Co., Ltd. Scene switching method and apparatus, electronic device, and storage medium
CN114900505B (zh) * 2022-04-18 2024-01-30 Guangzhou DSPPA Audio Technology Co., Ltd. Web-based audio scene timed switching method, apparatus, and medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104601694A (zh) * 2015-01-13 2015-05-06 Xiaomi Inc. Operation control method, terminal, relay device, smart device, and apparatus
CN106878762A (zh) * 2015-12-14 2017-06-20 Beijing Qihoo Technology Co., Ltd. Method, apparatus, server, and system for implementing terminal device switching
CN107205217A (zh) * 2017-06-19 2017-09-26 Guangzhou Anwang Information Technology Co., Ltd. Uninterrupted content pushing method and system based on smart speaker scene networking
CN110674482A (zh) * 2019-08-13 2020-01-10 Wuhan Pansheng Dingcheng Technology Co., Ltd. Multi-scene application computer
CN111312235A (zh) * 2018-12-11 2020-06-19 Alibaba Group Holding Limited Voice interaction method, apparatus, and system
US20200275225A1 (en) * 2014-01-17 2020-08-27 Proctor Consulting, LLC Smart hub
CN112104533A (zh) * 2020-09-14 2020-12-18 Shenzhen TCL Digital Technology Co., Ltd. Scene switching method, terminal and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104616675A (zh) * 2013-11-05 2015-05-13 Huawei Device Co., Ltd. Method for switching playback devices and mobile terminal
CN104142659B (zh) * 2013-11-12 2017-02-15 Zhuhai Unitech IoT Technology Co., Ltd. Smart home scene switching method and system
CN103984579B (zh) * 2014-05-30 2018-04-13 Man Jinbiao Method for sharing the real-time running state of a current application among multiple devices
CN106850940A (zh) * 2016-11-29 2017-06-13 Vivo Mobile Communication Co., Ltd. State switching method and mobile terminal
CN109597313A (zh) * 2018-11-30 2019-04-09 New H3C Technologies Co., Ltd. Scene switching method and apparatus
CN110010127A (zh) * 2019-04-01 2019-07-12 Beijing Roobo Technology Co., Ltd. Scene switching method, apparatus, device, and storage medium
CN110769280A (zh) * 2019-10-23 2020-02-07 Beijing Horizon Robotics Technology R&D Co., Ltd. Method and apparatus for continuing playback of a file

Also Published As

Publication number Publication date
US20230291601A1 (en) 2023-09-14
CN112104533B (zh) 2023-02-17
GB202305357D0 (en) 2023-05-24
GB2616133A (en) 2023-08-30
JP2023541636A (ja) 2023-10-03
CN112104533A (zh) 2020-12-18

Similar Documents

Publication Publication Date Title
WO2022052864A1 (zh) Scene switching method, terminal and storage medium
US10861461B2 (en) LED design language for visual affordance of voice user interfaces
EP3455720B1 (en) Led design language for visual affordance of voice user interfaces
CN108831448 (zh) Method, apparatus, and storage medium for controlling a smart device by voice
DE202017107611U1 (de) Compact home assistant design with a combined acoustic waveguide and heat sink
JP7393526B2 (ja) Method, electronic device, server system, and program for providing event clips
CN108520746 (zh) Method, apparatus, and storage medium for controlling a smart device by voice
WO2019134473A1 (zh) Speech recognition system, method, and apparatus
CN113728685 (zh) Power management techniques for waking a processor in a media playback system
CN111077785 (zh) Wake-up method, apparatus, terminal, and storage medium
WO2023071454A1 (zh) Scene synchronization method and apparatus, electronic device, and readable storage medium
CN112433836 (zh) Automatic application wake-up method, apparatus, and computer device
CN109979495 (zh) Face-recognition-based method and system for intelligently following audio playback progress
CN112671623 (zh) Projection-based wake-up method, apparatus, projection device, and computer storage medium
CN112735403 (zh) Smart home control system based on a smart speaker
WO2023231894A1 (zh) Wake-up method, apparatus, and system based on collaborative error correction, and medium and device
CN113053371 (zh) Voice control system and method, voice kit, and bone conduction and voice processing apparatus
CN115035894 (zh) Device response method and apparatus
CN114578705 (zh) Smart home control system based on the 5G Internet of Things
WO2024021587A1 (zh) Scenario point-control apparatus and air-conditioning control system
CN117193028 (zh) Control method and control apparatus for a smart device
CN112997453 (zh) Selecting a destination for sensor signals according to activated light settings
CN114500493 (zh) Control method for an Internet of Things device, terminal, and computer-readable storage medium
CN111048081 (zh) Control method and apparatus, electronic device, and control system
CN116403575 (zh) Wake-up-free voice interaction method, apparatus, storage medium, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21865918

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023516779

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 202305357

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20210902

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 03/07/2023)

122 Ep: pct application non-entry in european phase

Ref document number: 21865918

Country of ref document: EP

Kind code of ref document: A1