CN109597313A - Method for changing scenes and device - Google Patents

Method for changing scenes and device

Info

Publication number
CN109597313A
CN109597313A (application CN201811451665.9A)
Authority
CN
China
Prior art keywords
scene
voiceprint
user
individual scene
vocal print
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811451665.9A
Other languages
Chinese (zh)
Inventor
蒋孝来
卢丰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Technologies Co Ltd
Original Assignee
New H3C Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Technologies Co Ltd filed Critical New H3C Technologies Co Ltd
Priority to CN201811451665.9A priority Critical patent/CN109597313A/en
Publication of CN109597313A publication Critical patent/CN109597313A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 Systems controlled by a computer
    • G05B15/02 Systems controlled by a computer electric
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/418 Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/20 Pc systems
    • G05B2219/26 Pc applications
    • G05B2219/2642 Domotique, domestic, home control, automation, smart house
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The disclosure provides a scene switching method and device. The method comprises: matching each voiceprint in at least one received voiceprint against sample voiceprints in a voiceprint library to determine the target user identifier corresponding to each voiceprint; obtaining the personalized scene corresponding to the target user identifier according to the association between user identifiers and personalized scenes; processing the personalized scene together with environmental information to obtain a final personalized scene; and adjusting the current scene according to the final personalized scene. By matching the user's voiceprint against the sample voiceprints in the library, a personalized scene can be chosen according to the target user identifier tied to the user's identity, and the current scene is adjusted in combination with environmental information. Because the personalized scene a user needs is determined directly from the user's voiceprint, the time spent switching scenes is reduced and the efficiency and flexibility of scene switching are improved.

Description

Method for changing scenes and device
Technical field
The disclosure relates to the technical field of smart homes, and in particular to a scene switching method and device.
Background technique
With the continuous development of science and technology, users can control each electrical appliance in the home through a terminal, so that each appliance becomes a smart home device.
In the related art, different family members can set the states of the smart home devices according to their different habits, generating a personalized scene for each family member; each personalized scene contains the state parameters that member has set for the devices. When a user needs to switch the current scene to his or her personalized scene, the user first selects that scene and then triggers a scene-opening operation, which switches the current scene to the user's personalized scene.
However, when the terminal holds multiple personalized scenes, the user must first spend time selecting a scene before the switch can take place, so switching a personalized scene takes longer and its efficiency is low.
Summary of the invention
In view of the deficiencies of the prior art, the purpose of the disclosure is to provide a scene switching method and device.
A first aspect of the disclosure provides a scene switching method, the method comprising:
matching each voiceprint in at least one received voiceprint against sample voiceprints in a voiceprint library to determine the target user identifier corresponding to each voiceprint, wherein the voiceprint library includes at least one sample voiceprint and a user identifier associated with each sample voiceprint;
obtaining the personalized scene corresponding to the target user identifier according to the association between user identifiers and personalized scenes;
processing the personalized scene corresponding to the target user identifier together with environmental information to obtain a final personalized scene, the environmental information including one or more of: time information, season information and weather information;
adjusting the current scene according to the final personalized scene.
Optionally, processing the personalized scene corresponding to the target user identifier together with environmental information to obtain a final personalized scene comprises:
if the target user identifiers correspond to two or more personalized scenes, combining the two or more personalized scenes according to the weight corresponding to each target user identifier to obtain a combined personalized scene;
adjusting the combined personalized scene according to the environmental information to obtain the final personalized scene.
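The weighted combination above can be sketched as follows. This is an illustrative interpretation, not the patent's implementation: the device names, the weighted-average rule for numeric parameters, and the "on if any user wants it on" rule for switches are all assumptions.

```python
def combine_scenes(scenes_with_weights):
    """Combine personalized scenes by per-user weight.

    scenes_with_weights: list of (scene, weight) pairs, where a scene maps
    device name -> {parameter: value}. Numeric parameters are merged as a
    weighted average; boolean parameters are on if any scene turns them on.
    """
    combined = {}
    devices = {d for scene, _ in scenes_with_weights for d in scene}
    for dev in devices:
        params = {}
        keys = {k for scene, _ in scenes_with_weights for k in scene.get(dev, {})}
        for key in keys:
            vals = [(scene[dev][key], w) for scene, w in scenes_with_weights
                    if dev in scene and key in scene[dev]]
            if all(isinstance(v, bool) for v, _ in vals):
                params[key] = any(v for v, _ in vals)
            else:
                wsum = sum(w for _, w in vals)
                params[key] = sum(v * w for v, w in vals) / wsum
        combined[dev] = params
    return combined
```

For instance, an adult's scene wanting 22 °C with weight 2 and a child's scene wanting 26 °C with weight 1 would combine to roughly 23.3 °C.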
Optionally, processing the personalized scene corresponding to the target user identifier together with environmental information to obtain a final personalized scene comprises:
adjusting, according to the weather information and/or season information in the environmental information, the parameters of temperature devices in the personalized scene corresponding to the target user identifier to obtain the final personalized scene, the temperature devices being used to regulate room temperature;
and/or adjusting, according to the time information in the environmental information, the parameters of lighting devices in the personalized scene corresponding to the target user identifier to obtain the final personalized scene.
Optionally, before matching each voiceprint in the at least one received voiceprint against the sample voiceprints in the voiceprint library and determining the target user identifier corresponding to each voiceprint, the method further comprises:
obtaining at least one piece of voice information;
judging whether each piece of voice information includes wake-up voice information corresponding to a wake-up word;
if the voice information includes the wake-up voice information corresponding to the wake-up word, using the wake-up voice information as the voiceprint.
Optionally, before obtaining the personalized scene corresponding to the target user identifier, the method further comprises:
if any voiceprint in the at least one voiceprint does not match any sample voiceprint in the voiceprint library, generating prompt information for prompting the user to enter a voiceprint;
receiving the voiceprint entered by the user;
matching the voiceprint entered by the user against each sample voiceprint in the voiceprint library again to determine the target user identifier corresponding to the entered voiceprint.
Optionally, after generating the prompt information, the method further comprises:
if no voiceprint entered by the user is received within a first preset duration, obtaining a default scene;
adjusting the current scene according to the default scene.
Optionally, after adjusting the current scene according to the final personalized scene, the method further comprises:
if an update of the weather information is detected, updating the data corresponding to the final personalized scene according to the updated weather information to obtain an updated final personalized scene;
adjusting the scene corresponding to the final personalized scene according to the updated final personalized scene.
Optionally, after processing the personalized scene corresponding to the target user identifier together with environmental information to obtain the final personalized scene, the method further comprises:
after a second preset duration, obtaining the data corresponding to at least one final personalized scene;
updating the data corresponding to each personalized scene according to the data corresponding to each final personalized scene.
Another aspect of the disclosure provides a scene switching device, the device comprising:
a first matching module, configured to match each voiceprint in at least one received voiceprint against sample voiceprints in a voiceprint library to determine the target user identifier corresponding to each voiceprint, wherein the voiceprint library includes at least one sample voiceprint and a user identifier associated with each sample voiceprint;
a first obtaining module, configured to obtain the personalized scene corresponding to the target user identifier according to the association between user identifiers and personalized scenes;
a processing module, configured to process the personalized scene corresponding to the target user identifier together with environmental information to obtain a final personalized scene, the environmental information including one or more of: time information, season information and weather information;
a first adjusting module, configured to adjust the current scene according to the final personalized scene.
Optionally, the processing module is specifically configured to: if the target user identifiers correspond to two or more personalized scenes, combine the two or more personalized scenes according to the weight corresponding to each target user identifier to obtain a combined personalized scene; and adjust the combined personalized scene according to the environmental information to obtain the final personalized scene.
Optionally, the processing module is further configured to adjust, according to the weather information and/or season information in the environmental information, the parameters of temperature devices in the personalized scene corresponding to the target user identifier to obtain the final personalized scene, the temperature devices being used to regulate room temperature; and/or to adjust, according to the time information in the environmental information, the parameters of lighting devices in the personalized scene corresponding to the target user identifier to obtain the final personalized scene.
Optionally, the device further comprises:
a second obtaining module, configured to obtain at least one piece of voice information;
a judging module, configured to judge whether each piece of voice information includes wake-up voice information corresponding to a wake-up word;
a determining module, configured to use the wake-up voice information as the voiceprint if the voice information includes the wake-up voice information corresponding to the wake-up word.
Optionally, the device further comprises:
a generating module, configured to generate prompt information for prompting the user to enter a voiceprint if any voiceprint in the at least one voiceprint does not match any sample voiceprint in the voiceprint library;
a receiving module, configured to receive the voiceprint entered by the user;
a second matching module, configured to match the voiceprint entered by the user against each sample voiceprint in the voiceprint library again to determine the target user identifier corresponding to the entered voiceprint.
Optionally, the device further comprises:
a third obtaining module, configured to obtain a default scene if no voiceprint entered by the user is received within a first preset duration;
a second adjusting module, configured to adjust the current scene according to the default scene.
Optionally, the device further comprises:
a first updating module, configured to, if an update of the weather information is detected, update the data corresponding to the final personalized scene according to the updated weather information to obtain an updated final personalized scene;
a third adjusting module, configured to adjust the scene corresponding to the final personalized scene according to the updated final personalized scene.
Optionally, the device further comprises:
a fourth obtaining module, configured to obtain the data corresponding to at least one final personalized scene after a second preset duration;
a second updating module, configured to update the data corresponding to each personalized scene according to the data corresponding to each final personalized scene.
The beneficial effects of the disclosure are as follows:
In the scene switching method and device provided by the embodiments of the disclosure, each voiceprint in at least one received voiceprint is matched against the sample voiceprints in a voiceprint library to determine the target user identifier corresponding to each voiceprint; the personalized scene corresponding to the target user identifier is obtained according to the association between user identifiers and personalized scenes; the personalized scene is then processed together with environmental information to obtain a final personalized scene; and finally the current scene is adjusted according to the final personalized scene. By matching the user's voiceprint against the sample voiceprints in the library, the target user identifier corresponding to the user's identity is determined, a personalized scene can be chosen according to that identifier, and the current scene is adjusted in combination with environmental information. The user does not need to select from multiple personalized scenes: the needed personalized scene is determined directly from the user's voiceprint, which reduces the time spent switching scenes and improves the efficiency and flexibility of scene switching.
Further, because the personalized scene is adjusted in combination with environmental information, the adjusted final personalized scene can, while meeting the user's needs, be further tuned to the environment of the current scene, such as the time, season and weather, so that the final personalized scene better matches the user's expectations and improves user stickiness.
Detailed description of the invention
To explain the technical solutions of the embodiments of the disclosure more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings show only some embodiments of the disclosure and are therefore not to be regarded as limiting the scope; for those of ordinary skill in the art, other relevant drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of the environment involved in a scene switching method provided by the disclosure;
Fig. 2 is the flow diagram for the method for changing scenes that one embodiment of the disclosure provides;
Fig. 3 is the flow diagram for the method for changing scenes that another embodiment of the disclosure provides;
Fig. 4 is a schematic diagram of the scene switching device provided by an embodiment of the disclosure;
Fig. 5 is a schematic diagram of the scene switching device provided by another embodiment of the disclosure;
Fig. 6 is a schematic diagram of the scene switching device provided by yet another embodiment of the disclosure;
Fig. 7 is a schematic diagram of the scene switching device provided by yet another embodiment of the disclosure;
Fig. 8 is a schematic diagram of the scene switching device provided by yet another embodiment of the disclosure;
Fig. 9 is a schematic diagram of the scene switching device provided by yet another embodiment of the disclosure;
Fig. 10 is a schematic diagram of the scene switching device provided by an embodiment of the disclosure.
Specific embodiment
To make the purposes, technical solutions and advantages of the embodiments of the disclosure clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the disclosure.
Fig. 1 is a schematic diagram of the environment involved in a scene switching method provided by the disclosure. As shown in Fig. 1, the environment may include: a voice control device 110, a terminal 120 and at least one smart home device 130.
The voice control device 110 is connected to the terminal 120 and to each smart home device 130, for example over WiFi (Wireless Fidelity).
Specifically, the voice control device 110 can receive voice information uttered by a user, extract the voiceprint from the voice information, and match that voiceprint against the sample voiceprints in a pre-stored voiceprint library to find the matching sample voiceprint, taking the user identifier associated with the matching sample voiceprint as the target user identifier.
Accordingly, the voice control device 110 can then, according to the determined target user identifier and the preset association between user identifiers and personalized scenes, determine the personalized scene corresponding to the target user identifier, and process that personalized scene in combination with the current environmental information to obtain a final personalized scene.
Finally, the voice control device 110 can adjust the on/off state of each smart home device 130, and its operating parameters when on, according to the data in the final personalized scene, thereby switching the scene according to the voice information uttered by the user.
In addition, the user can preset a personalized scene that meets his or her needs through the terminal 120. For example, the terminal 120 can provide the user with a settings page for the personalized scene, on which, in response to user-triggered operations, the terminal sets the scene name, the user name, the opening degree of the curtains, the desired temperature and humidity, the on/off state of each lighting device (including the main light, spotlights, corridor light and background light), and the background music. Moreover, the user can also enter his or her voiceprint on this page.
It should be noted that the voice control device 110 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment or a personal digital assistant, which is not limited in the embodiments of the disclosure.
Fig. 2 is a flow diagram of the scene switching method provided by an embodiment of the disclosure, applied to the voice control device shown in Fig. 1. As shown in Fig. 2, the method comprises:
Step 201: match each voiceprint in at least one received voiceprint against the sample voiceprints in a voiceprint library to determine the target user identifier corresponding to each voiceprint.
The voiceprint library may include at least one sample voiceprint and a user identifier associated with each sample voiceprint.
To provide different personalized scenes to different users according to their different needs, the voice control device can look up the target user identifier corresponding to the voiceprint of each user in the current environment, so that in subsequent steps the personalized scene can be adjusted according to the target user identifier.
Therefore, the voice control device first receives the voice information uttered by users, performs feature extraction on the at least one piece of voice information to obtain at least one voiceprint, and matches the extracted voiceprints against the sample voiceprints to obtain the target user identifier corresponding to the matched sample voiceprint.
Specifically, if the voice control device receives voice information from only one user, it can extract the voiceprint from the received voice information and match it against each sample voiceprint in the pre-stored voiceprint library, determining whether any sample voiceprint is consistent with it and taking the user identifier of the consistent sample voiceprint as the target user identifier.
It should be noted that, in practice, the voice control device may receive voice information uttered by multiple users and extract at least one voiceprint. After matching the voiceprints against the sample voiceprints, if multiple voiceprints match sample voiceprints in the library, the user identifiers corresponding to all the matched sample voiceprints are used as target user identifiers.
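The matching in step 201 can be sketched as follows. The patent does not specify a matching algorithm; this sketch assumes voiceprints are fixed-length feature vectors compared by cosine similarity, and the threshold value is a made-up parameter.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_voiceprints(voiceprints, library, threshold=0.8):
    """Return the set of target user identifiers whose sample voiceprint
    is the best match (above the threshold) for some received voiceprint."""
    targets = set()
    for vp in voiceprints:
        best_user, best_score = None, threshold
        for user_id, sample in library.items():
            score = cosine(vp, sample)
            if score >= best_score:
                best_user, best_score = user_id, score
        if best_user is not None:
            targets.add(best_user)
    return targets
```

Voiceprints that match no sample at all simply contribute no identifier, which corresponds to the no-match branch handled later (prompting the user to enter a voiceprint).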
Step 202: obtain the personalized scene corresponding to the target user identifier according to the association between user identifiers and personalized scenes.
After the target user identifier is determined, the personalized scene corresponding to it can be obtained according to the preset association between user identifiers and personalized scenes, so that in subsequent steps the current scene can be adjusted according to that personalized scene.
Specifically, the voice control device can search the pre-stored associations for the user identifier that matches the target user identifier, and take the personalized scene corresponding to the matched user identifier as the personalized scene matching the target user identifier, thereby obtaining the data corresponding to that personalized scene.
It should be noted that the association between user identifiers and personalized scenes is established in advance. For example, in an application loaded by the terminal, the user sets the working state of each smart home device in a personalized scene to obtain a scene that meets his or her needs, and the terminal then establishes the association between the user identifier entered by the user and that personalized scene.
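The association lookup in step 202 is essentially a keyed table. In this sketch, the table contents (user names, device names, parameters) are invented for illustration; the patent only requires that some mapping from user identifier to scene data exists.

```python
# Hypothetical association table: user identifier -> personalized scene,
# where a scene maps device name -> desired state parameters.
associations = {
    "alice": {"main_light": {"on": True, "brightness": 70},
              "ac": {"on": True, "target_temp": 23}},
    "bob":   {"main_light": {"on": False},
              "ac": {"on": True, "target_temp": 26}},
}

def get_scene(target_user_id, associations, default=None):
    """Return the personalized scene for a target user identifier,
    or the given default scene if no association exists."""
    return associations.get(target_user_id, default)
```

The `default` argument mirrors the optional behavior described later: when no matching voiceprint (and hence no identifier) is found within the preset duration, a default scene is used instead.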
Step 203: process the personalized scene corresponding to the target user identifier together with environmental information to obtain a final personalized scene.
The environmental information may include one or more of: time information, season information and weather information.
The voice control device can not only select the personalized scene matching the user's needs according to the user's voiceprint, but also adjust the personalized scene in combination with the current environmental information to obtain the final personalized scene.
Specifically, the voice control device can first obtain the environmental information and, according to the time information, season information and weather information it contains, adjust the smart home devices in the personalized scene whose settings depend on time, season and weather, obtaining the adjusted final personalized scene.
For example, suppose the user's personalized scene turns the lights off and sets the desired temperature to 23 °C (degrees Celsius), while the environmental information indicates that the current time is 19:00, the season is summer and the weather is thundershowers. The voice control device can then turn on the lights in the personalized scene according to the time and weather information, and turn off the air conditioner according to the weather information without adjusting the temperature of the current scene.
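The thundershower example above can be sketched as a small rule-based adjustment. The specific rules (lights on after 19:00 or in stormy weather, air conditioner off during a thundershower) follow the example only and are not a general policy from the patent; device names are hypothetical.

```python
def adjust_for_environment(scene, env):
    """Return a copy of a personalized scene adjusted for the environment.

    env: dict with optional keys "hour", "season", "weather".
    Rules are illustrative: evening or stormy weather turns the main light
    on; a thundershower turns the air conditioner off (the storm already
    cools the room), leaving the temperature setting untouched.
    """
    adjusted = {dev: dict(params) for dev, params in scene.items()}
    hour = env.get("hour")
    weather = env.get("weather")
    if "main_light" in adjusted and (
            (hour is not None and hour >= 19) or weather == "thundershower"):
        adjusted["main_light"]["on"] = True
    if "ac" in adjusted and weather == "thundershower":
        adjusted["ac"]["on"] = False
    return adjusted
```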
Step 204: adjust the current scene according to the final personalized scene.
After obtaining the final personalized scene, the voice control device can adjust the on/off state and operating parameters of each smart home device in the current scene according to the on/off state and operating parameters of the corresponding device in the final personalized scene, thereby switching to the personalized scene matching the user's needs.
Specifically, for each smart home device in the current scene, the voice control device first obtains the device's on/off state and its operating parameters when on, and compares them with the on/off state and operating parameters of the same device in the final personalized scene. If the two are consistent, the device does not need to be adjusted; if they are inconsistent, the device is adjusted according to its on/off state and operating parameters in the final personalized scene.
Accordingly, after each smart home device in the scene has been adjusted in this way, the scene switch is complete, and the switched scene is the personalized scene the user expects.
However, in practice the data corresponding to the final personalized scene may not configure the working state and operating parameters of every smart home device in the scene. Therefore, when the voice control device detects that a certain device is not configured in the final personalized scene, it can adjust that device according to the device's on/off state and operating parameters in a default scene.
Meanwhile, the voice control device can also remind the user that the device has not been configured and monitor for user-triggered operations; if a user-triggered operation is detected, the device can be adjusted according to that operation, so that the adjusted personalized scene better matches the user's expectations.
In conclusion the method for changing scenes that the embodiment of the present disclosure provides, passes through at least one voiceprint that will be received In each voiceprint matched with the sample vocal print in vocal print library, determine each voiceprint corresponding target user mark Know, and according to the incidence relation between user identifier and individual scene, obtain target user and identify corresponding individual scene, Corresponding individual scene is identified further according to environmental information and target user, processing obtains final individual scene, last root According to final individual scene, current scene is adjusted.By the way that the sample vocal print in the vocal print of user and vocal print library is carried out Matching can identify according to target user so that it is determined that target user corresponding with user identity identifies and choose personalized field Scape, and combining environmental information is adjusted current scene, user from multiple individual scenes without selecting, Ji Kegen Individual scene needed for determining user according to the voiceprint of user reduces switching individual scene the time it takes, mentions The high efficiency and flexibility of switching individual scene.
Further, individual scene is adjusted by combining environmental information, so that final personalization adjusted Scene, can on the basis of meeting user demand, according to current scene corresponding environment, such as time, season and weather etc., Further adjustment is carried out to individual scene, final individual scene is enabled to be more in line with the expectation of user, improves and uses Family viscosity.
Fig. 3 is a flow diagram of the scene switching method provided by another embodiment of the disclosure, applied to the voice control device shown in Fig. 1. As shown in Fig. 3, the method comprises:
Step 301: obtain at least one piece of voice information.
During scene switching, the voice control device can determine from the user's voiceprint the user identifier of the user who uttered the voice information, obtain the personalized scene corresponding to that identifier, and thereby switch the scene based on the user's voiceprint.
Therefore, the voice control device first obtains the voice information uttered by the user, so that in subsequent steps the user's voiceprint can be extracted from the voice information and the current scene adjusted accordingly.
Moreover, in practice a user may utter voice information continuously, and the current scene may contain multiple users, so the voice control device can obtain at least one piece of voice information uttered by at least one user.
Step 302 judges whether each voice messaging includes waking up the corresponding wake-up voice messaging of word.
Since voice control device can receive a large amount of voice messaging, voice control device is needed from each voice Searching in information includes the voice messaging for waking up voice messaging, in the next steps, which to be made For voiceprint.
Specifically, voice control device can identify each voice messaging, judge in each voice messaging whether Including waking up voice messaging, if including waking up voice messaging, illustrate that user needs to wake up voice control device, so as to voice control Control equipment is adjusted current scene.
But if not including waking up voice messaging in voice messaging, illustrate that user is carrying out normal dialogue communication, and Voice control device is not waken up, also there is no the expectations switched over to current scene, therefore voice control device is not necessarily to root According to not including the voice messaging for waking up voice messaging, current scene is adjusted.
Step 303: if a piece of voice information contains the wake-up voice information corresponding to the wake-up word, use the wake-up voice information as voiceprint information.
If the voice control device detects that any piece of voice information among the multiple pieces contains wake-up voice information, it can use the wake-up voice information in that piece as voiceprint information, so that in subsequent steps the target user identifier indicating the user's identity can be determined from the voiceprint information.
It should be noted that if multiple pieces among the at least one piece of voice information contain wake-up voice information, the wake-up voice information in the different pieces can be used as different pieces of voiceprint information, indicating that multiple users need to wake up the voice control device to adjust the current scene.
For example, suppose the wake-up word is "Wheat". If, in the current scene, the voice control device receives the voice information "Wheat, switch to home mode" issued by an adult and also receives the voice information "Wheat, open entertainment mode" issued by a child, then both the wake-up voice information "Wheat" issued by the adult and the wake-up voice information "Wheat" issued by the child can be used as voiceprint information.
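The wake-word filtering of steps 302 and 303 can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the wake word, the helper name, and the use of text transcripts in place of audio are all simplifying assumptions:

```python
WAKE_WORD = "wheat"  # hypothetical wake word, following the example above

def extract_voiceprint_candidates(voice_messages):
    # Keep only utterances containing the wake word; each kept utterance
    # serves as one piece of voiceprint information for speaker matching.
    return [msg for msg in voice_messages if WAKE_WORD in msg.lower()]

messages = [
    "Wheat, switch to home mode",      # adult -> kept as voiceprint info
    "Wheat, open entertainment mode",  # child -> kept as voiceprint info
    "what should we cook tonight",     # normal conversation -> ignored
]
print(extract_voiceprint_candidates(messages))
```

As in the example above, both wake-word utterances are retained as separate pieces of voiceprint information, while the ordinary conversation is discarded.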
Step 304: match each piece of voiceprint information in the at least one piece received against the sample voiceprints in a voiceprint library, and determine the target user identifier corresponding to each piece of voiceprint information.
The voiceprint library may include at least one sample voiceprint and the user identifier associated with each sample voiceprint.
Step 304 is similar to step 201 and is not described again here.
It should be noted that, in practical applications, a sample voiceprint in the voiceprint library may or may not match the obtained voiceprint information, and the voice control device can perform different operations depending on the matching result: if the voiceprint information matches a sample voiceprint, step 308 can be performed; if the matching fails, step 305 can be performed.
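A minimal sketch of the matching branch in step 304 might look like the following. The cosine similarity over feature vectors and the threshold value are assumptions made purely for illustration; a real system would use a proper speaker verification model:

```python
def match_voiceprint(voiceprint, voiceprint_library, threshold=0.8):
    # Return the user identifier of the best-matching sample voiceprint,
    # or None when no sample clears the similarity threshold (step 305 path).
    best_id, best_score = None, 0.0
    for user_id, sample in voiceprint_library.items():
        score = similarity(voiceprint, sample)
        if score > best_score:
            best_id, best_score = user_id, score
    return best_id if best_score >= threshold else None

def similarity(a, b):
    # Toy cosine similarity on feature vectors, purely illustrative.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

library = {"male owner": [1.0, 0.2, 0.1], "child": [0.1, 1.0, 0.3]}
print(match_voiceprint([0.9, 0.25, 0.1], library))  # matches "male owner"
```

A successful match leads to step 308; a `None` result corresponds to the failure branch that triggers the prompt information of step 305.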
Step 305: if any piece of voiceprint information among the at least one piece does not match any sample voiceprint in the voiceprint library, generate prompt information.
The prompt information is used to prompt the user to enter voiceprint information.
If a piece of obtained voiceprint information matches none of the sample voiceprints in the voiceprint library, either the voice control device has not stored this user's voiceprint, or it has failed to recognize a sample voiceprint matching the voiceprint information.
Therefore, the voice control device can generate prompt information to remind the user to re-enter voiceprint information. For example, the prompt information may be voice information: the loudspeaker of the voice control device announces the recognition failure and asks the user to re-enter voiceprint information. The prompt information may also be text information: the display screen of the voice control device shows the recognition failure and asks the user to re-enter voiceprint information. Of course, other kinds of prompt information can also be generated; the embodiments of the disclosure do not limit this.
Accordingly, the user can re-enter voiceprint information in response to the prompt, and the voice control device can then perform step 306. If, however, the voice control device receives no voice information within a first preset duration, the user has not entered voiceprint information, and step 311 can be performed.
Step 306: receive the voiceprint information entered by the user.
Step 306 is similar to step 301 and is not described again here.
Step 307: match the voiceprint information entered by the user against each sample voiceprint in the voiceprint library again, and determine the target user identifier corresponding to the entered voiceprint information.
Step 307 is similar to step 201 and is not described again here.
It should be noted that if the voice control device still cannot match the entered voiceprint information to a corresponding target user identifier, it may remind the user to input their identity to the voice control device, so that the voiceprint library can be updated.
For example, the voice control device may ask the user "May I ask who you are?"; the user may then reply "male owner". The voice control device can then, according to the user identifier corresponding to "male owner", replace the voiceprint associated with that user identifier in the voiceprint library with the voiceprint information entered by the user.
Step 308: obtain the personalized scene corresponding to the target user identifier according to the association between user identifiers and personalized scenes.
Step 308 is similar to step 202 and is not described again here.
Step 309: process the personalized scene corresponding to the target user identifier according to environmental information, obtaining a final personalized scene.
The environmental information may include one or more of the following: time information, season information, and weather information.
So that the adjusted personalized scene better matches the user's expectations, the voice control device can obtain environmental information and adjust the personalized scene corresponding to the target user identifier accordingly, obtaining the adjusted final personalized scene.
Moreover, in practical applications, the voice control device may receive the voiceprint information of multiple users and thus obtain multiple personalized scenes. To meet the needs of different users, the voice control device can combine the multiple personalized scenes and then adjust the combined personalized scene according to the environmental information, obtaining the final personalized scene.
Therefore, step 309 may include step 309a:
Step 309a: if the personalized scenes corresponding to the target user identifiers include two or more personalized scenes, combine the two or more personalized scenes according to the weight corresponding to each target user identifier to obtain a combined personalized scene, and adjust the combined personalized scene according to the environmental information to obtain the final personalized scene.
Specifically, if the voice control device collects multiple personalized scenes, it can weight and combine the multiple personalized scenes according to the weight of the target user identifier corresponding to each personalized scene, obtaining the combined personalized scene.
For example, suppose the weight corresponding to the target user identifier "male owner" is 0.7 and the weight corresponding to the target user identifier "child" is 0.3. If the personalized scene of "child" requires a temperature of 18 °C and the personalized scene of "male owner" requires a temperature of 23 °C, the weighted temperature is 23 × 0.7 + 18 × 0.3 = 21.5 °C.
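The weighted combination of step 309a can be sketched as below, reproducing the temperature example; representing a scene as a dictionary of numeric parameters is an assumption for illustration:

```python
def combine_scenes(scenes, weights):
    # Weighted combination of per-user scene parameters (step 309a).
    # `scenes` maps user identifier -> {parameter name: numeric value}.
    combined = {}
    for user_id, params in scenes.items():
        w = weights[user_id]
        for key, value in params.items():
            combined[key] = combined.get(key, 0.0) + w * value
    return combined

scenes = {"male owner": {"temperature_c": 23.0}, "child": {"temperature_c": 18.0}}
weights = {"male owner": 0.7, "child": 0.3}
print(combine_scenes(scenes, weights))  # 23*0.7 + 18*0.3 ≈ 21.5 °C
```

The same weighting extends to any other numeric scene parameter (humidity, light level, and so on) present in every user's scene.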
Further, since the environmental information may include at least one of time information, season information, and weather information, the voice control device can adjust the personalized scene differently depending on the environmental information. Therefore, step 309 may also include step 309b:
Step 309b: according to the weather information and/or season information in the environmental information, adjust the parameters of the temperature device in the personalized scene corresponding to the target user identifier to obtain the final personalized scene, the temperature device being used to regulate indoor temperature; and/or, according to the time information in the environmental information, adjust the parameters of the lighting device in the personalized scene corresponding to the target user identifier to obtain the final personalized scene.
The voice control device can adjust the parameters of the temperature device in the personalized scene according to the weather information and/or season information. If the weather information and/or season information indicates that the current temperature is low, while the personalized scene calls for lowering the temperature of the current scene, the voice control device can, according to the current temperature indicated by the weather and/or season information, switch the temperature device to the off state or raise its target temperature.
Similarly, if the time information in the environmental information indicates that it is currently 21:00 at night while the personalized scene calls for the lighting device to be off, the voice control device can determine that the current moment is nighttime and the lighting device needs to be on, and can switch the lighting device in the personalized scene to the on state.
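The environmental adjustments of step 309b, combining the two examples above, might be sketched as follows. The field names, the hour boundaries for "night", and the cold-weather rule are illustrative assumptions rather than part of the disclosure:

```python
def adjust_for_environment(scene, env):
    # Adjust temperature and lighting parameters of a personalized scene
    # according to environmental information (step 309b).
    adjusted = dict(scene)
    # Weather/season: if it is already cold outdoors while the scene calls
    # for cooling, switch the temperature device off instead.
    if env.get("outdoor_temp_c", 20) <= 10 and scene.get("ac_mode") == "cool":
        adjusted["ac_mode"] = "off"
    # Time: during night hours (assumed 19:00-06:00), lights should be on.
    hour = env.get("hour", 12)
    if hour >= 19 or hour < 6:
        adjusted["lights"] = "on"
    return adjusted

scene = {"ac_mode": "cool", "lights": "off"}
env = {"outdoor_temp_c": 5, "hour": 21}  # cold evening at 21:00
print(adjust_for_environment(scene, env))  # cooling off, lights on
```

With a warm midday environment the same scene would pass through unchanged, which matches the intent that the environment only overrides parameters that contradict it.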
Step 310: adjust the current scene according to the final personalized scene.
Step 310 is similar to step 204 and is not described again here.
Step 311: if no voiceprint information entered by the user is received within the first preset duration, obtain a default scene.
If the voice control device receives no voiceprint information entered by the user within the first preset duration, the user has not responded to the prompt information generated by the voice control device; the device can then obtain a preset default scene so that, in subsequent steps, the current scene can be adjusted according to the default scene.
It should be noted that the default scene may be set by the user, may be the most frequently used personalized scene, may be the personalized scene whose environmental information matches the current season, or may be a personalized scene determined in another way; the embodiments of the disclosure do not limit this.
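One of the fallback strategies above, picking the most frequently used personalized scene as the default, can be sketched as follows; the usage-log representation is an assumption:

```python
from collections import Counter

def pick_default_scene(usage_log):
    # Fall back to the most frequently used personalized scene (step 311).
    # Returns None when there is no usage history to draw on.
    if not usage_log:
        return None
    scene, _count = Counter(usage_log).most_common(1)[0]
    return scene

print(pick_default_scene(["home", "home", "entertainment", "home"]))  # home
```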
Step 312: adjust the current scene according to the default scene.
Step 312 is similar to step 204 and is not described again here.
Step 313: adjust the scene corresponding to the final personalized scene according to updated weather information.
After the voice control device adjusts the current scene and obtains the scene corresponding to the final personalized scene, if it detects that the weather information in the environmental information has been updated, it can adjust the scene corresponding to the final personalized scene according to the updated weather information, so that the adjusted scene better matches the current weather.
Optionally, if the voice control device detects that the weather information has been updated, it can update the data corresponding to the final personalized scene according to the updated weather information to obtain an updated final personalized scene, and then adjust the scene corresponding to the final personalized scene according to the updated final personalized scene.
Specifically, if the voice control device detects that the weather information has been updated, it can first update the weather-related data in the final personalized scene according to the updated weather information to obtain the updated final personalized scene, and then adjust the operating state and operating parameters of the smart home devices in the current scene in a manner similar to step 204.
For example, corresponding to the example in step 203, if the voice control device detects that the weather information changes from "thundershower" to "sunny", it can switch the air conditioner among the smart home devices from the off state to the on state.
Step 314: update the data corresponding to each personalized scene.
To further reduce the number of times the user changes the state of each smart home device in a personalized scene, the voice control device can update each user's personalized scene according to the personalized scene corresponding to that user and the final personalized scene adjusted with the environmental information.
Optionally, after a second preset duration, the voice control device can obtain the data corresponding to at least one final personalized scene and update the data corresponding to each personalized scene according to the data corresponding to each final personalized scene.
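As a sketch of the periodic update in step 314, the stored scene parameters could be nudged toward the final scenes actually applied over the past period. The moving-average rule and the smoothing factor `alpha` are assumptions; the disclosure does not specify an update formula:

```python
def update_scene(stored, applied_finals, alpha=0.2):
    # Move each stored personalized-scene parameter toward the values of
    # the final personalized scenes applied over the past period (step 314).
    updated = dict(stored)
    for final in applied_finals:
        for key, value in final.items():
            if key in updated:
                updated[key] = (1 - alpha) * updated[key] + alpha * value
    return updated

stored = {"temperature_c": 23.0}
finals = [{"temperature_c": 21.5}, {"temperature_c": 22.0}]
print(update_scene(stored, finals))  # drifts from 23.0 toward about 22.6
```

Under this rule the stored scene gradually converges on what the environmental adjustments keep producing, so fewer manual corrections are needed over time.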
The second preset duration may be one week, one month, or another length of time; the embodiments of the disclosure do not limit this.
It should be noted that, in practical applications, the user can set a personalized scene that meets their needs through a terminal, and the terminal can establish, in response to an operation triggered by the user, the correspondence between the set personalized scene and the user identifier. The user identifier indicates the identity of the user; for example, it may be "male owner" entered by the user, a name entered by the user, or other information used to indicate the user's identity, which the embodiments of the disclosure do not limit.
Further, after detecting the user's settings, the terminal may remind the user to input wake-up voice information, receive the wake-up voice information input by the user, perform feature extraction on the wake-up voice information to obtain the user's voiceprint information, and establish the association between the voiceprint information and the user identifier. The terminal can finally send to the voice control device the association between the voiceprint information and the user identifier, the data corresponding to the personalized scene, and the user's voiceprint information. Thus, when the voice control device adjusts the current scene, it can determine the user identifier corresponding to the user's voiceprint information and adjust the current scene according to the personalized scene corresponding to that user identifier, meeting the user's personalized needs.
For example, the user can set the humidity, temperature, music, lighting, and the operating states and operating parameters of other smart home devices of a personalized scene through an APP (application) loaded on the terminal, and the terminal can then send the data corresponding to the personalized scene to the voice control device.
It should be noted that, instead of setting the data corresponding to each smart home device in the personalized scene through the terminal, the user can do so through the voice control device, which can receive the wake-up voice information input by the user, extract the voiceprint information from the wake-up voice information, and establish the association between the voiceprint information and the user identifier of the user who owns the personalized scene.
In conclusion the method for changing scenes that the embodiment of the present disclosure provides, passes through at least one voiceprint that will be received In each voiceprint matched with the sample vocal print in vocal print library, determine each voiceprint corresponding target user mark Know, and according to the incidence relation between user identifier and individual scene, obtain target user and identify corresponding individual scene, Corresponding individual scene is identified further according to environmental information and target user, processing obtains final individual scene, last root According to final individual scene, current scene is adjusted.By the way that the sample vocal print in the vocal print of user and vocal print library is carried out Matching can identify according to target user so that it is determined that target user corresponding with user identity identifies and choose personalized field Scape, and combining environmental information is adjusted current scene, user from multiple individual scenes without selecting, Ji Kegen Individual scene needed for determining user according to the voiceprint of user reduces switching individual scene the time it takes, mentions The high efficiency and flexibility of switching individual scene.
Further, individual scene is adjusted by combining environmental information, so that final personalization adjusted Scene, can on the basis of meeting user demand, according to current scene corresponding environment, such as time, season and weather etc., Further adjustment is carried out to individual scene, final individual scene is enabled to be more in line with the expectation of user, improves and uses Family viscosity.
Fig. 4 is a schematic diagram of a scene switching apparatus provided by an embodiment of the disclosure. As shown in Fig. 4, the apparatus specifically includes:
a first matching module 401, configured to match each piece of voiceprint information in at least one piece received against the sample voiceprints in a voiceprint library, and determine the target user identifier corresponding to each piece of voiceprint information, wherein the voiceprint library includes at least one sample voiceprint and the user identifier associated with each sample voiceprint;
a first obtaining module 402, configured to obtain the personalized scene corresponding to the target user identifier according to the association between user identifiers and personalized scenes;
a processing module 403, configured to process the personalized scene corresponding to the target user identifier according to environmental information to obtain a final personalized scene, the environmental information including one or more of the following: time information, season information, and weather information;
a first adjustment module 404, configured to adjust the current scene according to the final personalized scene.
Optionally, the processing module 403 is specifically configured to: if the personalized scenes corresponding to the target user identifiers include two or more personalized scenes, combine the two or more personalized scenes according to the weight corresponding to each target user identifier to obtain a combined personalized scene; and adjust the combined personalized scene according to the environmental information to obtain the final personalized scene.
Optionally, the processing module 403 is further specifically configured to: according to the weather information and/or season information in the environmental information, adjust the parameters of the temperature device in the personalized scene corresponding to the target user identifier to obtain the final personalized scene, the temperature device being used to regulate indoor temperature; and/or, according to the time information in the environmental information, adjust the parameters of the lighting device in the personalized scene corresponding to the target user identifier to obtain the final personalized scene.
Optionally, referring to Fig. 5, the apparatus further includes:
a second obtaining module 405, configured to obtain at least one piece of voice information;
a judgment module 406, configured to judge whether each piece of voice information contains the wake-up voice information corresponding to a wake-up word;
a determining module 407, configured to: if a piece of voice information contains the wake-up voice information corresponding to the wake-up word, use the wake-up voice information as the voiceprint information.
Optionally, referring to Fig. 6, the apparatus further includes:
a generation module 408, configured to generate prompt information if any piece of voiceprint information among the at least one piece does not match any sample voiceprint in the voiceprint library, the prompt information being used to prompt the user to enter voiceprint information;
a receiving module 409, configured to receive the voiceprint information entered by the user;
a second matching module 410, configured to match the voiceprint information entered by the user against each sample voiceprint in the voiceprint library again, and determine the target user identifier corresponding to the voiceprint information entered by the user.
Optionally, referring to Fig. 7, the apparatus further includes:
a third obtaining module 411, configured to obtain a default scene if no voiceprint information entered by the user is received within the first preset duration;
a second adjustment module 412, configured to adjust the current scene according to the default scene.
Optionally, referring to Fig. 8, the apparatus further includes:
a first update module 413, configured to: if it is detected that the weather information has been updated, update the data corresponding to the final personalized scene according to the updated weather information to obtain an updated final personalized scene;
a third adjustment module 414, configured to adjust the scene corresponding to the final personalized scene according to the updated final personalized scene.
Optionally, referring to Fig. 9, the apparatus further includes:
a fourth obtaining module 415, configured to obtain, after a second preset duration, the data corresponding to at least one final personalized scene;
a second update module 416, configured to update the data corresponding to each personalized scene according to the data corresponding to each final personalized scene.
In conclusion the device for changing scenes that the embodiment of the present disclosure provides, passes through at least one voiceprint that will be received In each voiceprint matched with the sample vocal print in vocal print library, determine each voiceprint corresponding target user mark Know, and according to the incidence relation between user identifier and individual scene, obtain target user and identify corresponding individual scene, Corresponding individual scene is identified further according to environmental information and target user, processing obtains final individual scene, last root According to final individual scene, current scene is adjusted.By the way that the sample vocal print in the vocal print of user and vocal print library is carried out Matching can identify according to target user so that it is determined that target user corresponding with user identity identifies and choose personalized field Scape, and combining environmental information is adjusted current scene, user from multiple individual scenes without selecting, Ji Kegen Individual scene needed for determining user according to the voiceprint of user reduces switching individual scene the time it takes, mentions The high efficiency and flexibility of switching individual scene.
Further, individual scene is adjusted by combining environmental information, so that final personalization adjusted Scene, can on the basis of meeting user demand, according to current scene corresponding environment, such as time, season and weather etc., Further adjustment is carried out to individual scene, final individual scene is enabled to be more in line with the expectation of user, improves and uses Family viscosity.
The above apparatus is used to perform the method provided by the foregoing embodiments; its implementation principle and technical effects are similar and are not described again here.
The above modules may be one or more integrated circuits configured to implement the above method, for example: one or more application specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field programmable gate arrays (FPGA), etc. For another example, when one of the above modules is implemented in the form of a processing element scheduling program code, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of invoking program code. For another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 10 is a schematic diagram of a scene switching apparatus provided by an embodiment of the disclosure. The apparatus may be integrated in a terminal device or in a chip of the terminal device, and the terminal may be a computing device with a scene switching function.
The apparatus includes: a memory 1001 and a processor 1002.
The memory 1001 is used to store a program, and the processor 1002 calls the program stored in the memory 1001 to perform the above method embodiments. The specific implementation and technical effects are similar and are not described again here.
Optionally, the disclosure also provides a program product, such as a computer-readable storage medium, including a program which, when executed by a processor, performs the above method embodiments.
In the several embodiments provided by the disclosure, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. The division of the units is only a logical functional division; in actual implementation there may be other division manners: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiment.
In addition, the functional units in the embodiments of the disclosure may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods of the embodiments of the disclosure. The aforementioned storage medium includes various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Claims (16)

1. A scene switching method, characterized in that the method comprises:
matching each piece of voiceprint information in at least one piece of voiceprint information received against sample voiceprints in a voiceprint library, and determining a target user identifier corresponding to each piece of voiceprint information, wherein the voiceprint library comprises at least one sample voiceprint and a user identifier associated with each sample voiceprint;
obtaining a personalized scene corresponding to the target user identifier according to an association between user identifiers and personalized scenes;
processing the personalized scene corresponding to the target user identifier according to environmental information to obtain a final personalized scene, the environmental information comprising one or more of the following: time information, season information, and weather information;
adjusting a current scene according to the final personalized scene.
2. The method according to claim 1, characterized in that the processing the personalized scene corresponding to the target user identifier according to environmental information to obtain a final personalized scene comprises:
if the personalized scenes corresponding to the target user identifiers comprise two or more personalized scenes, combining the two or more personalized scenes according to a weight corresponding to each target user identifier to obtain a combined personalized scene;
adjusting the combined personalized scene according to the environmental information to obtain the final personalized scene.
3. The method according to claim 1, characterized in that the processing the personalized scene corresponding to the target user identifier according to environmental information to obtain a final personalized scene comprises:
adjusting, according to the weather information and/or season information in the environmental information, parameters of a temperature device in the personalized scene corresponding to the target user identifier to obtain the final personalized scene, the temperature device being used to regulate indoor temperature;
and/or adjusting, according to the time information in the environmental information, parameters of a lighting device in the personalized scene corresponding to the target user identifier to obtain the final personalized scene.
4. The method according to claim 1, characterized in that before the matching each piece of voiceprint information in the at least one piece of voiceprint information received against the sample voiceprints in the voiceprint library and determining the target user identifier corresponding to each piece of voiceprint information, the method further comprises:
obtaining at least one piece of voice information;
judging whether each piece of voice information contains wake-up voice information corresponding to a wake-up word;
if a piece of voice information contains the wake-up voice information corresponding to the wake-up word, using the wake-up voice information as the voiceprint information.
5. The method according to claim 1, characterized in that before the obtaining the personalized scene corresponding to the target user identifier, the method further comprises:
if any piece of voiceprint information among the at least one piece of voiceprint information does not match any sample voiceprint in the voiceprint library, generating prompt information, the prompt information being used to prompt a user to enter voiceprint information;
receiving the voiceprint information entered by the user;
matching the voiceprint information entered by the user against each sample voiceprint in the voiceprint library again, and determining the target user identifier corresponding to the voiceprint information entered by the user.
6. The method according to claim 5, characterized in that after the generating prompt information, the method further comprises:
if no voiceprint information entered by the user is received within a first preset duration, obtaining a default scene;
adjusting the current scene according to the default scene.
7. The method according to any one of claims 1 to 6, characterized in that after the adjusting the current scene according to the final personalized scene, the method further comprises:
if it is detected that the weather information has been updated, updating data corresponding to the final personalized scene according to the updated weather information to obtain an updated final personalized scene;
adjusting the scene corresponding to the final personalized scene according to the updated final personalized scene.
8. The method according to any one of claims 1 to 6, characterized in that after the processing the personalized scene corresponding to the target user identifier according to environmental information to obtain the final personalized scene, the method further comprises:
after a second preset duration, obtaining data corresponding to at least one final personalized scene;
updating the data corresponding to each personalized scene according to the data corresponding to each final personalized scene.
9. A scene switching apparatus, characterized in that the apparatus comprises:
a first matching module, configured to match each piece of voiceprint information in at least one piece of voiceprint information received against sample voiceprints in a voiceprint library, and determine a target user identifier corresponding to each piece of voiceprint information, wherein the voiceprint library comprises at least one sample voiceprint and a user identifier associated with each sample voiceprint;
a first obtaining module, configured to obtain a personalized scene corresponding to the target user identifier according to an association between user identifiers and personalized scenes;
a processing module, configured to process the personalized scene corresponding to the target user identifier according to environmental information to obtain a final personalized scene, the environmental information comprising one or more of the following: time information, season information, and weather information;
a first adjustment module, configured to adjust a current scene according to the final personalized scene.
10. The device according to claim 9, characterized in that the processing module is specifically configured to: if the individual scenes corresponding to the target user identifiers comprise two or more individual scenes, combine the two or more individual scenes according to the weight corresponding to each target user identifier to obtain a combined individual scene; and adjust the combined individual scene according to the environmental information to obtain the final individual scene.
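One way the weighted combination of claim 10 might work is a per-user weighted average of numeric scene parameters. The patent does not define the combination rule; the normalisation, field names, and sample values below are assumptions for illustration only.

```python
def combine_scenes(scenes_by_user, weights):
    """Combine two or more individual scenes into one, weighting each
    user's preferences by that user's weight (claim 10). Weights are
    normalised, so they need not sum to 1."""
    total = sum(weights[user] for user in scenes_by_user)
    combined = {}
    for user, scene in scenes_by_user.items():
        w = weights[user] / total
        for key, value in scene.items():
            combined[key] = combined.get(key, 0.0) + w * value
    return combined

scenes = {"alice": {"temp": 26.0, "brightness": 80.0},
          "bob":   {"temp": 22.0, "brightness": 40.0}}
weights = {"alice": 3, "bob": 1}
result = combine_scenes(scenes, weights)
# result["temp"] == 25.0, result["brightness"] == 70.0
```

With Alice weighted three times as heavily as Bob, the combined scene lands closer to her preferences.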
11. The device according to claim 9, characterized in that the processing module is further configured to: adjust, according to the weather information and/or season information in the environmental information, the parameters of the temperature device in the individual scene corresponding to the target user identifier to obtain the final individual scene, the temperature device being used to regulate room temperature; and/or adjust, according to the time information in the environmental information, the parameters of the lighting device in the individual scene corresponding to the target user identifier to obtain the final individual scene.
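The environment-driven adjustment of claim 11 could look like the sketch below: season or weather information nudges the temperature-device setpoint, and the time of day gates the lighting level. The specific offsets, thresholds, and field names are illustrative assumptions, not part of the disclosure.

```python
def adjust_by_environment(scene, env):
    """Adjust temperature and lighting parameters of an individual scene
    using environmental information (claim 11)."""
    adjusted = dict(scene)
    # Season/weather drives the temperature device (assumed offsets).
    season = env.get("season")
    if season == "winter":
        adjusted["ac_target_c"] = scene.get("ac_target_c", 24) + 2
    elif season == "summer":
        adjusted["ac_target_c"] = scene.get("ac_target_c", 24) - 2
    # Time of day drives the lighting device: dim late at night.
    hour = env.get("hour")
    if hour is not None:
        preset = scene.get("light_pct", 70)
        adjusted["light_pct"] = min(preset, 30) if (hour >= 22 or hour < 6) else preset
    return adjusted

out = adjust_by_environment({"ac_target_c": 24, "light_pct": 70},
                            {"season": "winter", "hour": 23})
# out == {"ac_target_c": 26, "light_pct": 30}
```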
12. The device according to claim 9, characterized in that the device further comprises:
a second obtaining module, configured to obtain at least one piece of voice information;
a judgment module, configured to judge whether each piece of voice information contains the wake-up voice information corresponding to a wake-up word;
a determining module, configured to, if a piece of voice information contains the wake-up voice information corresponding to the wake-up word, use that wake-up voice information as the voiceprint information.
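The wake-word filtering of claim 12 could be sketched as follows. In a real system the wake word would be detected acoustically from the audio itself; here a `transcript` field stands in for that detection, and the wake word and message layout are assumptions made purely for illustration.

```python
def extract_voiceprints(voice_messages, wake_word="hello_device"):
    """Keep only utterances containing the wake word and treat those
    utterances as the voiceprint samples to be matched (claim 12)."""
    return [msg for msg in voice_messages
            if wake_word in msg.get("transcript", "")]

msgs = [{"transcript": "hello_device turn on the lights"},
        {"transcript": "just chatting"},
        {"transcript": "hello_device good night"}]
prints = extract_voiceprints(msgs)
# Two of the three utterances contain the wake word.
```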
13. The device according to claim 9, characterized in that the device further comprises:
a generation module, configured to generate prompt information if any one of the at least one voiceprint does not match any sample voiceprint in the voiceprint library, the prompt information being used to prompt the user to enter voiceprint information;
a receiving module, configured to receive the voiceprint information entered by the user;
a second matching module, configured to match the voiceprint information entered by the user against each sample voiceprint in the voiceprint library again and determine the target user identifier corresponding to the voiceprint information entered by the user.
14. The device according to claim 13, characterized in that the device further comprises:
a third obtaining module, configured to obtain a default scene if no voiceprint information entered by the user is received within the first preset duration;
a second adjustment module, configured to adjust the current scene according to the default scene.
15. The device according to any one of claims 9 to 14, characterized in that the device further comprises:
a first update module, configured to, if it is detected that the weather information has been updated, update the data corresponding to the final individual scene according to the updated weather information to obtain an updated final individual scene;
a third adjustment module, configured to adjust the scene corresponding to the final individual scene according to the updated final individual scene.
16. The device according to any one of claims 9 to 14, characterized in that the device further comprises:
a fourth obtaining module, configured to obtain the data corresponding to at least one final individual scene after the second preset duration;
a second update module, configured to update the data corresponding to each individual scene according to the data corresponding to each final individual scene.
CN201811451665.9A 2018-11-30 2018-11-30 Method for changing scenes and device Pending CN109597313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811451665.9A CN109597313A (en) 2018-11-30 2018-11-30 Method for changing scenes and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811451665.9A CN109597313A (en) 2018-11-30 2018-11-30 Method for changing scenes and device

Publications (1)

Publication Number Publication Date
CN109597313A true CN109597313A (en) 2019-04-09

Family

ID=65959214

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811451665.9A Pending CN109597313A (en) 2018-11-30 2018-11-30 Method for changing scenes and device

Country Status (1)

Country Link
CN (1) CN109597313A (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104990213A (en) * 2015-06-29 2015-10-21 广东美的制冷设备有限公司 Method and system for cooperatively controlling air conditioner in multi-user environment
CN105180381A (en) * 2015-10-22 2015-12-23 珠海格力电器股份有限公司 Air conditioning control method and equipment
CN105371425A (en) * 2015-10-12 2016-03-02 美的集团股份有限公司 Air conditioner
CN105467854A (en) * 2016-01-06 2016-04-06 北京京东尚科信息技术有限公司 Scene information-based device operation method and scene information-based device operation device
CN106249607A (en) * 2016-07-28 2016-12-21 桂林电子科技大学 Virtual Intelligent household analogue system and method
CN106254677A (en) * 2016-09-19 2016-12-21 深圳市金立通信设备有限公司 A kind of scene mode setting method and terminal
CN106462124A (en) * 2016-07-07 2017-02-22 深圳狗尾草智能科技有限公司 Method, system and robot for identifying and controlling household appliances based on intention
CN106647311A (en) * 2017-01-16 2017-05-10 上海智臻智能网络科技股份有限公司 Intelligent central control system and equipment, server and intelligent equipment control method
CN107454260A (en) * 2017-08-03 2017-12-08 深圳天珑无线科技有限公司 Terminal enters control method, electric terminal and the storage medium of certain scenarios pattern
CN106054644B (en) * 2016-06-30 2017-12-22 慧锐通智能科技股份有限公司 A kind of intelligent home furnishing control method and system
CN107544266A (en) * 2016-06-28 2018-01-05 广州零号软件科技有限公司 Health Care Services robot
CN107680600A (en) * 2017-09-11 2018-02-09 平安科技(深圳)有限公司 Sound-groove model training method, audio recognition method, device, equipment and medium
CN107678285A (en) * 2017-08-21 2018-02-09 珠海格力电器股份有限公司 Device intelligence mobile controller, system, method and smart machine
CN107748500A (en) * 2017-10-10 2018-03-02 三星电子(中国)研发中心 Method and apparatus for controlling smart machine
US20180081876A1 (en) * 2016-09-16 2018-03-22 Kabushiki Kaisha Toshiba Information management system
CN108153158A (en) * 2017-12-19 2018-06-12 美的集团股份有限公司 Switching method, device, storage medium and the server of household scene
US20180308475A1 (en) * 2017-04-20 2018-10-25 Tyco Fire & Security Gmbh Artificial Intelligence and Natural Language Processing Based Building and Fire Systems Management System

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110989391A (en) * 2019-12-25 2020-04-10 珠海格力电器股份有限公司 Method and device for controlling equipment, control equipment and storage medium
CN111276144A (en) * 2020-02-21 2020-06-12 北京声智科技有限公司 Platform matching method, device, equipment and medium
CN111578465A (en) * 2020-04-27 2020-08-25 青岛海尔空调器有限总公司 Intelligent adjusting method and system for indoor environment
CN111578465B (en) * 2020-04-27 2021-09-21 青岛海尔空调器有限总公司 Intelligent adjusting method and system for indoor environment
CN112104533A (en) * 2020-09-14 2020-12-18 深圳Tcl数字技术有限公司 Scene switching method, terminal and storage medium
CN112104533B (en) * 2020-09-14 2023-02-17 深圳Tcl数字技术有限公司 Scene switching method, terminal and storage medium
CN112506070A (en) * 2020-12-16 2021-03-16 珠海格力电器股份有限公司 Control method and device of intelligent household equipment
CN113268004A (en) * 2021-04-22 2021-08-17 深圳Tcl新技术有限公司 Scene creating method and device, computer equipment and storage medium
CN113341743A (en) * 2021-06-07 2021-09-03 深圳市欧瑞博科技股份有限公司 Intelligent household equipment control method and device, electronic equipment and storage medium
CN113341743B (en) * 2021-06-07 2023-11-28 深圳市欧瑞博科技股份有限公司 Smart home equipment control method and device, electronic equipment and storage medium
CN113488041A (en) * 2021-06-28 2021-10-08 青岛海尔科技有限公司 Method, server and information recognizer for scene recognition
CN117014247A (en) * 2023-08-28 2023-11-07 广东金朋科技有限公司 Scene generation method, system and storage medium based on state learning

Similar Documents

Publication Publication Date Title
CN109597313A (en) Method for changing scenes and device
CN106773742B (en) Sound control method and speech control system
CN104820675B (en) Photograph album display methods and device
CN104394491B (en) A kind of intelligent earphone, Cloud Server and volume adjusting method and system
CN110336723A (en) Control method and device, the intelligent appliance equipment of intelligent appliance
CN106452987B (en) A kind of sound control method and device, equipment
CN107483493A (en) Interactive calendar prompting method, device, storage medium and intelligent domestic system
CN109637548A (en) Voice interactive method and device based on Application on Voiceprint Recognition
CN106679321A (en) Intelligent refrigerator food management method and intelligent refrigerator
CN106782526A (en) Sound control method and device
CN111508483A (en) Equipment control method and device
CN109547308A (en) A kind of control method of smart home, device, storage medium and server
CN109688036A (en) A kind of control method of intelligent appliance, device, intelligent appliance and storage medium
CN106647311A (en) Intelligent central control system and equipment, server and intelligent equipment control method
WO2015158086A1 (en) Mobile terminal and method of processing loadable content
CN107688329A (en) Intelligent home furnishing control method and intelligent home control system
KR20170048238A (en) Method, apparatus and device for changing display background
CN106412313A (en) Method and system for automatically adjusting screen display parameters, and intelligent terminal
AU2019259066B2 (en) Photographic method and terminal device
CN110333840A (en) Recommended method, device, electronic equipment and storage medium
CN108847216A (en) Method of speech processing and electronic equipment, storage medium
US20170013111A1 (en) Intelligent notification device and intelligent notification method
WO2022160865A1 (en) Control method, system, and apparatus for air conditioner, and air conditioner
CN108932947B (en) Voice control method and household appliance
CN108063701B (en) Method and device for controlling intelligent equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190409