CN106254677A - Scene mode setting method and terminal - Google Patents
Scene mode setting method and terminal
- Publication number
- CN106254677A CN106254677A CN201610831740.9A CN201610831740A CN106254677A CN 106254677 A CN106254677 A CN 106254677A CN 201610831740 A CN201610831740 A CN 201610831740A CN 106254677 A CN106254677 A CN 106254677A
- Authority
- CN
- China
- Prior art keywords
- sound
- scene mode
- person
- voice features
- feature data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification techniques
- G10L17/22—Interactive procedures; Man-machine interfaces
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
- H04M1/72454—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Environmental & Geological Engineering (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
An embodiment of the invention discloses a scene mode setting method and a terminal. The method includes: collecting ambient sound and extracting voice feature data of a person from the ambient sound; searching a preset sound library for a sound that matches the extracted voice feature data; if a matching sound is found, determining the scene mode corresponding to the matching sound, where the scene mode identifies the reminder mode for incoming calls or text messages; and setting the current scene mode to the determined scene mode. A terminal according to an embodiment of the invention can improve the user experience.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to a scene mode setting method and a terminal.
Background
At present, when a terminal receives a call or a message, it reminds the user according to a mode the user has set manually in advance, for example a ringtone, ringtone plus vibration, or silent mode, to prompt the user to check the incoming call or message.
However, when the terminal user attends a meeting or another formal occasion and forgets to switch the terminal from ringtone mode to silent or vibration mode, the terminal will ring when an incoming call request arrives, which makes for a poor user experience.
Summary of the invention
Embodiments of the present invention provide a scene mode setting method and a terminal that can improve the user experience.
In a first aspect, an embodiment of the present invention provides a scene mode setting method. The method includes:
collecting ambient sound and extracting voice feature data of a person from the ambient sound; searching a preset sound library for a sound that matches the extracted voice feature data;
if a sound matching the extracted voice feature data is found, determining the scene mode corresponding to the matching sound, where the scene mode identifies the reminder mode for incoming calls or text messages; and
setting the current scene mode to the determined scene mode.
In another aspect, an embodiment of the present invention provides a terminal. The terminal includes:
an extraction unit, configured to collect ambient sound and extract voice feature data of a person from the ambient sound;
a search unit, configured to search a preset sound library for a sound that matches the extracted voice feature data;
a determination unit, configured to determine, if a sound matching the extracted voice feature data is found, the scene mode corresponding to the matching sound, where the scene mode identifies the reminder mode for incoming calls or text messages; and
a setting unit, configured to set the current scene mode to the determined scene mode.
In the embodiments of the present invention, the terminal collects ambient sound and extracts voice feature data of a person from it; searches a preset sound library for a sound matching the extracted voice feature data; if such a sound is found, determines the scene mode corresponding to the matching sound; and sets the current scene mode to the determined scene mode. Because the terminal sets the current scene mode based on the voices of people other than the terminal user, the user experience can be improved.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a scene mode setting method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of a scene mode setting method provided by another embodiment of the present invention;
Fig. 3 is a schematic block diagram of a terminal provided by an embodiment of the present invention;
Fig. 4 is a schematic block diagram of a terminal provided by another embodiment of the present invention;
Fig. 5 is a schematic block diagram of a terminal provided by yet another embodiment of the present invention.
Detailed description of the invention
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be understood that, when used in this specification and the appended claims, the terms "include" and "comprise" indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should further be understood that the term "and/or" as used in the description of the invention and the appended claims refers to, and includes, any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected" or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, portable devices such as mobile phones, laptop computers or tablet computers having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad). It should also be understood that, in some embodiments, the device may not be a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touchpad).
In the discussion below, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices such as a physical keyboard, a mouse and/or a joystick.
The terminal supports various applications, such as one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application and/or a video-player application.
The various applications executable on the terminal may share at least one common physical user-interface device, such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed from one application to the next and/or within a given application. In this way, a common physical architecture of the terminal (such as the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Refer to Fig. 1, which is a schematic flowchart of a scene mode setting method provided by an embodiment of the present invention. In this embodiment, the scene mode setting method is executed by a terminal. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another terminal. The scene mode setting method shown in Fig. 1 may include the following steps:
S101: Collect ambient sound and extract voice feature data of a person from the ambient sound.
When the user needs to attend a formal activity such as a meeting, the user turns on the function of automatically setting the scene mode; the user can turn this function on through a settings interface.
When the terminal detects the preset operation that turns on automatic scene mode setting, it collects ambient sound and extracts voice feature data of a person from the collected ambient sound.
The terminal may collect sound in real time or at preset intervals; this is not limited here. The preset interval may be 1 minute or may be configured according to actual needs.
The voice feature data includes the sound-wave frequency. The extracted voice feature data may be the voice feature data of one person or of at least two persons.
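The extraction step can be sketched as follows. The description only states that the feature data includes the sound-wave frequency, so this minimal sketch estimates the fundamental frequency of one audio frame by autocorrelation and adds the frame energy; the function and field names are illustrative assumptions, not part of the patent.

```python
import numpy as np

def extract_voice_features(samples, sample_rate):
    """Estimate simple voice feature data from one frame of audio.

    Hypothetical sketch: fundamental frequency via the first strong
    autocorrelation peak in the human pitch range, plus frame energy.
    """
    frame = samples - samples.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search for the strongest autocorrelation peak between roughly 60-400 Hz.
    lo, hi = sample_rate // 400, sample_rate // 60
    lag = lo + int(np.argmax(corr[lo:hi]))
    return {
        "fundamental_hz": sample_rate / lag,
        "energy": float(np.mean(frame ** 2)),
    }

# Synthetic 220 Hz tone standing in for a recorded voice.
sr = 8000
t = np.arange(sr) / sr
features = extract_voice_features(np.sin(2 * np.pi * 220 * t), sr)
```

A real terminal would likely use richer features (e.g. cepstral coefficients) rather than a single pitch estimate, but the shape of the output, a small feature record per frame, is what the subsequent search step consumes.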
S102: Search a preset sound library for a sound that matches the extracted voice feature data.
The terminal stores in advance, in the preset sound library, the voices of one or at least two different preset persons. A preset person may be, for example, a leader, a business partner, a family member and/or a friend; this is not limited here.
If the terminal finds a sound matching the extracted voice feature data, step S103 is performed; otherwise, the method returns to step S101.
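The lookup against the preset sound library can be sketched as a nearest-match search. The entry layout and the frequency tolerance are assumptions made for the example; the patent does not specify how "matching" is decided.

```python
def find_matching_sound(extracted, sound_library, tolerance_hz=15.0):
    """Return the library entry whose stored voice is closest to the
    extracted feature data, or None if nothing is within tolerance."""
    best, best_diff = None, tolerance_hz
    for entry in sound_library:
        diff = abs(entry["fundamental_hz"] - extracted["fundamental_hz"])
        if diff < best_diff:
            best, best_diff = entry, diff
    return best

# Illustrative preset library: each preset person carries a scene mode.
preset_library = [
    {"person": "leader",  "fundamental_hz": 110.0, "scene_mode": "silent"},
    {"person": "partner", "fundamental_hz": 180.0, "scene_mode": "vibration"},
]
match = find_matching_sound({"fundamental_hz": 112.5}, preset_library)
miss = find_matching_sound({"fundamental_hz": 300.0}, preset_library)
```

Returning `None` for a miss mirrors the flow of Fig. 1: no match means the method falls back to step S101 and keeps sampling.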
S103: If a sound matching the extracted voice feature data is found, determine the scene mode corresponding to the matching sound, where the scene mode identifies the reminder mode for incoming calls or text messages.
When the terminal finds a sound matching the extracted voice feature data, it determines the scene mode corresponding to the matching sound. The scene mode identifies the reminder mode for incoming calls or text messages.
The scene mode corresponding to the matching sound may include, but is not limited to, vibration mode, ringtone mode, vibration-and-ringtone mode, silent mode, silent-with-screen-on mode or a custom mode.
Vibration mode indicates that incoming calls or messages are announced by vibration; ringtone mode indicates that they are announced by a ringtone; vibration-and-ringtone mode indicates that they are announced by vibration plus a ringtone; silent mode is the reminder mode in which no sound is emitted when an incoming call request or a message is received; and silent-with-screen-on mode indicates that, when an incoming call request or a message is received, the user is alerted by lighting up the display screen.
S104: Set the current scene mode to the determined scene mode.
The terminal sets the current scene mode to the scene mode corresponding to the sound in the preset sound library that matches the voice feature data.
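Steps S101 to S104 can be tied together in a toy model of the terminal. The library format, the 15 Hz matching tolerance and the class names are assumptions made for the example, not details from the patent.

```python
from enum import Enum

class SceneMode(Enum):
    RINGTONE = "ringtone"
    VIBRATION = "vibration"
    SILENT = "silent"

class Terminal:
    """Toy model of the S101-S104 flow under the stated assumptions."""

    def __init__(self, sound_library):
        self.sound_library = sound_library
        self.current_mode = SceneMode.RINGTONE

    def update_scene_mode(self, extracted_features):
        # S102/S103: find a matching preset voice and its scene mode.
        for entry in self.sound_library:
            if abs(entry["fundamental_hz"]
                   - extracted_features["fundamental_hz"]) <= 15.0:
                # S104: set the current scene mode to the determined one.
                self.current_mode = entry["scene_mode"]
                return True
        # No match: keep the current mode (the flow returns to S101).
        return False

terminal = Terminal([{"fundamental_hz": 110.0, "scene_mode": SceneMode.SILENT}])
updated = terminal.update_scene_mode({"fundamental_hz": 108.0})
```

On a real device the final assignment would go through the platform's ringer-mode API rather than a field on a class.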
With the scheme above, the terminal collects ambient sound and extracts voice feature data of a person from it; searches the preset sound library for a sound matching the extracted voice feature data; if such a sound is found, determines the scene mode corresponding to the matching sound; and sets the current scene mode to the determined scene mode. Because the terminal sets the current scene mode based on the voices of people other than the terminal user, the problems and embarrassment caused by an inappropriate setting on special occasions can be avoided.
Refer to Fig. 2, which is a schematic flowchart of a scene mode setting method provided by another embodiment of the present invention. In this embodiment, the scene mode setting method is executed by a terminal. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another terminal. The scene mode setting method shown in Fig. 2 may include the following steps:
S201: Collect ambient sound and extract voice feature data of a person from the ambient sound.
When the user needs to attend a formal activity such as a meeting, the user turns on the function of automatically setting the scene mode; the user can turn this function on through a settings interface.
When the terminal detects the preset operation that turns on automatic scene mode setting, it collects ambient sound and extracts voice feature data of a person from the collected ambient sound.
The terminal may collect sound in real time or at preset intervals; this is not limited here.
The voice feature data includes the sound-wave frequency. The extracted voice feature data may be the voice feature data of one person or of at least two persons.
Further, step S201 may be: collecting ambient sound every preset time period and extracting voice feature data of a person from the ambient sound.
The preset time period may be 1 minute, 5 minutes or 10 minutes, but is not limited thereto and can be set according to actual needs.
S202: Search a preset sound library for a sound that matches the extracted voice feature data.
The terminal stores in advance, in the preset sound library, the voices of one or at least two different preset persons. A preset person may be, for example, a leader, a business partner, a family member and/or a friend; this is not limited here.
The terminal may also, according to user operations, establish in advance a preset correspondence between sounds and scene modes and save it in the preset sound library.
If the terminal finds a sound matching the extracted voice feature data, step S2031 or step S2032 is performed; otherwise, the method returns to step S201. Specifically, when the preset sound library contains the voices of two preset persons whose corresponding scene modes differ, step S2031 is performed; when the voices of all preset persons in the library correspond to silent mode, step S2032 is performed.
S2031: If a sound matching the extracted voice feature data is found, determine the scene mode corresponding to the matching sound according to the preset correspondence between sounds and scene modes.
When the preset sound library contains the voices of two preset persons whose corresponding scene modes differ and the terminal finds a sound matching the extracted voice feature data, the terminal determines the scene mode corresponding to the matching sound according to the preset correspondence between sounds and scene modes. The scene mode identifies the reminder mode for incoming calls or text messages.
The scene mode corresponding to the matching sound may include, but is not limited to, vibration mode, ringtone mode, vibration-and-ringtone mode, silent mode or silent-with-screen-on mode.
Vibration mode indicates that incoming calls or messages are announced by vibration; ringtone mode indicates that they are announced by a ringtone; vibration-and-ringtone mode indicates that they are announced by vibration plus a ringtone; silent mode is the reminder mode in which no sound is emitted when an incoming call request or a message is received; and silent-with-screen-on mode indicates that, when an incoming call request or a message is received, the user is alerted by lighting up the display screen.
It can be understood that, if at least two sounds match the extracted voice feature data and the matching sounds correspond to different scene modes, the scene mode corresponding to the first matching sound may be taken as the target scene mode, or the mode may be adjusted to a non-ringtone mode or a non-ringtone-and-vibration mode. The target scene mode is the scene mode to be set.
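The conflict case, where several matched voices map to different scene modes, can be sketched as a small policy function. The embodiment allows either taking the first match's mode or falling back to a non-ringtone mode; this sketch takes the first match but overrides it with vibration when that mode would still ring. The override choice is an assumption for the example.

```python
def resolve_target_mode(matched_modes):
    """Pick a target scene mode when matched voices disagree.

    matched_modes: scene modes of the matching sounds, in match order.
    """
    if not matched_modes:
        raise ValueError("no matching sounds")
    if len(set(matched_modes)) == 1:
        return matched_modes[0]          # no conflict
    first = matched_modes[0]
    # Prefer the first match; never let a conflict resolve to ringing.
    return first if first != "ringtone" else "vibration"
```

Biasing ties away from ringtone mode fits the stated motivation: on a formal occasion an unexpected ring is the failure the method exists to prevent.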
S2032: If a sound matching the extracted voice feature data is found, determine that the scene mode corresponding to the matching sound is silent mode.
When the voices of all preset persons in the preset sound library correspond to silent mode and the terminal finds a sound matching the extracted voice feature data, the terminal determines that the scene mode corresponding to the matching sound is silent mode. The scene mode identifies the reminder mode for incoming calls or text messages.
S204: Judge whether the current scene mode is identical to the determined scene mode.
When the terminal has determined the scene mode corresponding to the voice feature data contained in the ambient sound, it obtains the current scene mode and judges whether the current scene mode is identical to the determined scene mode.
If the current scene mode differs from the determined scene mode, step S205 is performed. If they are identical, nothing is done and the method returns to step S201.
S205: If the current scene mode differs from the determined scene mode, set the current scene mode to the determined scene mode.
When the terminal confirms that the current scene mode differs from the scene mode corresponding to the matching sound, it switches the current scene mode to the scene mode corresponding to the sound in the preset sound library that matches the voice feature data.
When the voices of all preset persons in the preset sound library correspond to silent mode, the terminal finds a sound matching the extracted voice feature data and it determines that the corresponding scene mode is silent mode, step S205 may include: setting the current scene mode to silent mode and keeping it for a preset time.
For example, the terminal sets the current scene mode to silent mode and keeps it for 15 minutes, i.e. silent mode is enabled for 15 minutes; the preset time is not limited to this and may take other values. Afterwards, the scene mode is again configured according to the mode determined at that time.
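Steps S204 and S205, including the timed silent hold, can be sketched as follows. The class name and the injectable timestamp are assumptions made so the example stays testable; the 15-minute hold comes from the embodiment.

```python
import time

class ProfileSwitcher:
    """Sketch of S204-S205: switch only when the determined mode differs
    from the current one, and hold silent mode for a preset duration."""

    HOLD_SECONDS = 15 * 60  # the embodiment's 15-minute silent hold

    def __init__(self, current_mode="ringtone"):
        self.current_mode = current_mode
        self._silent_until = 0.0

    def apply(self, determined_mode, now=None):
        now = time.time() if now is None else now
        if now < self._silent_until:
            return self.current_mode       # still inside the silent hold
        if determined_mode == self.current_mode:
            return self.current_mode       # S204: identical, do nothing
        self.current_mode = determined_mode  # S205: switch modes
        if determined_mode == "silent":
            self._silent_until = now + self.HOLD_SECONDS
        return self.current_mode

switcher = ProfileSwitcher()
mode_a = switcher.apply("silent", now=0.0)       # switches, starts the hold
mode_b = switcher.apply("ringtone", now=60.0)    # ignored: inside the hold
mode_c = switcher.apply("ringtone", now=1000.0)  # hold expired: switches
```

The hold prevents a brief pause in the preset person's speech from flipping the terminal back to ringtone mode in the middle of the meeting.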
With the scheme above, the terminal collects ambient sound and extracts voice feature data of a person from it; searches the preset sound library for a sound matching the extracted voice feature data; if such a sound is found, determines the scene mode corresponding to the matching sound; and sets the current scene mode to the determined scene mode. Because the terminal sets the current scene mode based on the voices of people other than the terminal user, the problems and embarrassment caused by an inappropriate setting on special occasions can be avoided.
Since the terminal sets the scene mode through the preset correspondence between sounds and scene modes, it can switch scene modes flexibly.
Refer to Fig. 3, which is a schematic block diagram of a terminal provided by an embodiment of the present invention. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another terminal. The terminal 300 of this embodiment includes units for performing the steps of the embodiment corresponding to Fig. 1; for details, refer to the related description of Fig. 1 and its embodiment, which is not repeated here. The terminal of this embodiment includes: an extraction unit 310, a search unit 320, a determination unit 330 and a setting unit 340.
The extraction unit 310 is configured to collect ambient sound and extract voice feature data of a person from it. The extraction unit 310 sends the extracted voice feature data to the search unit 320.
The search unit 320 is configured to receive the extracted voice feature data sent by the extraction unit 310 and search a preset sound library for a sound matching the extracted voice feature data.
The search unit 320 sends the search result to the determination unit 330.
The determination unit 330 is configured to receive the search result sent by the search unit 320 and, if the search result indicates that a sound matching the extracted voice feature data has been found, determine the scene mode corresponding to the matching sound. The scene mode identifies the reminder mode for incoming calls or text messages. The determination unit 330 sends information about the determined scene mode to the setting unit 340.
The setting unit 340 is configured to receive the information about the determined scene mode sent by the determination unit 330 and set the current scene mode to the determined scene mode.
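The four units of the Fig. 3 terminal and the data passed between them can be sketched as four small functions wired in sequence. Only the unit responsibilities come from the text; the helper names, the dict shapes and the simplified extraction stand-in are assumptions.

```python
def extraction_unit(ambient_sound):
    # Stand-in for real signal processing: the "recording" already
    # carries a pitch estimate so the wiring of the units stays visible.
    return {"fundamental_hz": ambient_sound["pitch_hz"]}

def search_unit(features, sound_library, tolerance_hz=15.0):
    # Returns the matching library entry, or None (the search result).
    for entry in sound_library:
        if abs(entry["fundamental_hz"] - features["fundamental_hz"]) <= tolerance_hz:
            return entry
    return None

def determination_unit(matched_entry):
    # Maps the search result to a scene mode, or None when nothing matched.
    return matched_entry["scene_mode"] if matched_entry else None

def setting_unit(state, determined_mode):
    # Applies the determined scene mode to the terminal state.
    if determined_mode is not None:
        state["current_mode"] = determined_mode
    return state

library = [{"fundamental_hz": 200.0, "scene_mode": "vibration"}]
state = {"current_mode": "ringtone"}
features = extraction_unit({"pitch_hz": 195.0})
state = setting_unit(state, determination_unit(search_unit(features, library)))
```

Keeping each unit a pure function over plain data mirrors the extraction → search → determination → setting hand-offs the block diagram describes.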
With the scheme above, the terminal collects ambient sound and extracts voice feature data of a person from it; searches the preset sound library for a sound matching the extracted voice feature data; if such a sound is found, determines the scene mode corresponding to the matching sound; and sets the current scene mode to the determined scene mode. Because the terminal sets the current scene mode based on the voices of people other than the terminal user, the problems and embarrassment caused by an inappropriate setting on special occasions can be avoided.
Refer to Fig. 4, which is a schematic block diagram of a terminal provided by another embodiment of the present invention. The terminal may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another terminal. The terminal 400 of this embodiment includes units for performing the steps of the embodiment corresponding to Fig. 2; for details, refer to the related description of Fig. 2 and its embodiment, which is not repeated here. The terminal of this embodiment includes: an extraction unit 410, a search unit 420, a determination unit 430 and a setting unit 440, wherein the setting unit 440 includes a judging unit 441 and a switching unit 442.
The extraction unit 410 is configured to collect ambient sound and extract voice feature data of a person from it.
Optionally, the extraction unit 410 is specifically configured to collect ambient sound every preset time period and extract voice feature data of a person from the ambient sound.
The extraction unit 410 sends the extracted voice feature data to the search unit 420.
The search unit 420 is configured to receive the extracted voice feature data sent by the extraction unit 410 and search a preset sound library for a sound matching the extracted voice feature data.
Search unit 420 by lookup result to determining that unit 430 sends.
The determining unit 430 is configured to receive the search result sent by the search unit 420, and, if the search result indicates that a sound matching the extracted human voice feature data has been found, to determine the scene mode corresponding to the matched sound; wherein the scene mode identifies the alert style for incoming calls or text messages.
Optionally, if a sound matching the extracted human voice feature data is found, the determining unit 430 is specifically configured to determine the scene mode corresponding to the matched sound according to a preset correspondence between sounds and scene modes.
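The matching step of the search unit 420 and the sound-to-mode correspondence used by the determining unit 430 can be sketched together. The library contents, speaker identifiers, mode names, and the cosine-similarity threshold below are all illustrative assumptions; the patent does not prescribe a matching metric.

```python
import math
from typing import Dict, List, Optional

# Hypothetical preset sound library: speaker id -> stored feature vector.
PRESET_SOUND_LIBRARY: Dict[str, List[float]] = {
    "boss": [0.9, 0.1, 0.8],
    "teacher": [0.2, 0.7, 0.3],
}

# Assumed preset correspondence between sounds and scene modes.
SOUND_TO_SCENE_MODE: Dict[str, str] = {
    "boss": "silent",
    "teacher": "vibrate",
}

def find_matching_sound(features: List[float], threshold: float = 0.95) -> Optional[str]:
    """Return the library entry whose stored features best match, if above threshold."""
    def cosine(a: List[float], b: List[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    best_id, best_score = None, 0.0
    for sound_id, stored in PRESET_SOUND_LIBRARY.items():
        score = cosine(features, stored)
        if score > best_score:
            best_id, best_score = sound_id, score
    return best_id if best_score >= threshold else None

def scene_mode_for(features: List[float]) -> Optional[str]:
    """Determining-unit step: map the matched sound to its scene mode, if any."""
    match = find_matching_sound(features)
    return SOUND_TO_SCENE_MODE.get(match) if match else None
```

Separating the library lookup from the mode correspondence mirrors the division of labour between the search unit 420 and the determining unit 430.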
Optionally, if a sound matching the extracted human voice feature data is found, the determining unit 430 is specifically configured to determine that the scene mode corresponding to the matched sound is silent mode.
The determining unit 430 sends information about the determined scene mode to the setting unit 440.
The judging unit 441 of the setting unit 440 is configured to receive the information about the determined scene mode sent by the determining unit 430, and to judge whether the current scene mode is identical to the determined scene mode.
If the judging unit 441 judges that the current scene mode differs from the determined scene mode, the switching unit 442 sets the current scene mode to the determined scene mode.
When the determining unit 430 finds a sound matching the extracted human voice feature data and determines the scene mode corresponding to the matched sound, the switching unit 442 is specifically configured to set the current scene mode to silent mode and to keep it for a preset time.
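One plausible reading of "keep silent mode for a preset time" is a timer that restores the previous mode when the hold period expires; the patent does not prescribe a mechanism, so the timer-based revert and all names below are assumptions. A minimal sketch of the switching unit 442:

```python
import threading
from typing import Optional

class SwitchUnit:
    """Sketch of switching unit 442: switch to silent mode, then
    automatically restore the previous mode after a preset time."""

    def __init__(self, preset_seconds: float = 30 * 60):
        self.current_mode = "ring"
        self.preset_seconds = preset_seconds
        self._timer: Optional[threading.Timer] = None

    def switch_to_silent(self) -> None:
        previous = self.current_mode
        self.current_mode = "silent"
        if self._timer is not None:
            self._timer.cancel()  # a new match restarts the hold period
        self._timer = threading.Timer(self.preset_seconds, self._restore, [previous])
        self._timer.daemon = True
        self._timer.start()

    def _restore(self, previous_mode: str) -> None:
        self.current_mode = previous_mode
```

Cancelling any pending timer before starting a new one ensures that repeated matches extend the silent period rather than reverting mid-hold.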
With the above scheme, the terminal collects ambient sound and extracts human voice feature data from it; searches a preset sound library for a sound that matches the extracted human voice feature data; if such a sound is found, determines the scene mode corresponding to the matched sound; and sets the current scene mode to the determined scene mode. Because the terminal sets the current scene mode according to voices other than that of the terminal user, it can avoid the problems and embarrassment caused on special occasions by an inappropriate setting.
The terminal can set the scene mode through a preset correspondence between sounds and scene modes, and can thus switch scene modes flexibly.
Referring to Fig. 5, Fig. 5 is a schematic block diagram of a terminal provided by yet another embodiment of the present invention. The terminal 500 in this embodiment may include: one or more processors 510; one or more input devices 520; one or more output devices 530; and a memory 540. The processor 510, input device 520, output device 530 and memory 540 are connected via a bus 550.
The memory 540 is configured to store program instructions.

The processor 510 performs the following operations according to the program instructions stored in the memory 540:
The processor 510 is configured to collect ambient sound and extract human voice feature data from the ambient sound.
The processor 510 is further configured to search a preset sound library for a sound that matches the extracted human voice feature data.
If a sound matching the extracted human voice feature data is found, the processor 510 is further configured to determine the scene mode corresponding to the matched sound; wherein the scene mode identifies the alert style for incoming calls or text messages.
The processor 510 is further configured to set the current scene mode to the determined scene mode.
Further, if a sound matching the extracted human voice feature data is found, the processor 510 is specifically configured to determine the scene mode corresponding to the matched sound according to a preset correspondence between sounds and scene modes.
Further, if a sound matching the extracted human voice feature data is found, the processor 510 is specifically configured to determine that the scene mode corresponding to the matched sound is silent mode, to set the current scene mode to silent mode, and to keep it for a preset time.
Further, the processor 510 is specifically configured to collect ambient sound at preset time intervals and extract human voice feature data from the ambient sound.
Further, the processor 510 is further configured to judge whether the current scene mode is identical to the determined scene mode, and, if the current scene mode differs from the determined scene mode, to set the current scene mode to the determined scene mode.
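The operations that the processor 510 performs — collect at a preset interval, extract, match, determine, judge, then switch — can be tied together in one loop. The callables and the `iterations` cap below are hypothetical hooks added so the sketch is testable; a device would run the loop indefinitely against its real audio and profile APIs.

```python
import time
from typing import Callable, List, Optional

def scene_mode_loop(
    collect: Callable[[], List[float]],
    extract: Callable[[List[float]], List[float]],
    find_match: Callable[[List[float]], Optional[str]],
    mode_for: Callable[[str], str],
    get_mode: Callable[[], str],
    set_mode: Callable[[str], None],
    interval_s: float = 60.0,
    iterations: Optional[int] = None,
) -> None:
    """End-to-end sketch of the processor 510 operations.

    All callables are injected hooks, not a real device API.
    """
    done = 0
    while iterations is None or done < iterations:
        features = extract(collect())       # collect + extract
        match = find_match(features)        # search preset sound library
        if match is not None:
            determined = mode_for(match)    # determine corresponding scene mode
            if get_mode() != determined:    # judge: identical to current mode?
                set_mode(determined)        # switch only when it differs
        time.sleep(interval_s)
        done += 1
```

Switching only when the determined mode differs from the current one avoids redundant profile writes on every sampling cycle.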
With the above scheme, the terminal collects ambient sound and extracts human voice feature data from it; searches a preset sound library for a sound that matches the extracted human voice feature data; if such a sound is found, determines the scene mode corresponding to the matched sound; and sets the current scene mode to the determined scene mode. Because the terminal sets the current scene mode according to voices other than that of the terminal user, it can avoid the problems and embarrassment caused on special occasions by an inappropriate setting.
The terminal can set the scene mode through a preset correspondence between sounds and scene modes, and can thus switch scene modes flexibly.
It should be understood that, in the embodiments of the present invention, the processor 510 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The input device 520 may include a trackpad, a fingerprint sensor (for collecting the user's fingerprint information and fingerprint orientation information), a microphone, etc.; the output device 530 may include a display (such as an LCD), a speaker, etc.
The memory 540 may include read-only memory and random access memory, and provides instructions and data to the processor 510. Part of the memory 540 may also include non-volatile random access memory. For example, the memory 540 may also store information about the device type.
In a specific implementation, the processor 510, input device 520 and output device 530 described in the embodiments of the present invention may perform the implementations described in the first and second embodiments of the scene mode setting method provided by the embodiments of the present invention, and may also perform the implementation of the terminal described in the embodiments of the present invention, which are not repeated here.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the terminal and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed terminal and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The steps in the methods of the embodiments of the present invention may be reordered, combined and deleted according to actual needs.

The units in the terminals of the embodiments of the present invention may be combined, divided and deleted according to actual needs.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place, or they may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A scene mode setting method, characterized in that the method comprises:
collecting ambient sound, and extracting human voice feature data from the ambient sound; searching a preset sound library for a sound that matches the extracted human voice feature data;
if a sound matching the extracted human voice feature data is found, determining the scene mode corresponding to the matched sound; wherein the scene mode identifies the alert style for incoming calls or text messages;
setting the current scene mode to the determined scene mode.
2. The method according to claim 1, characterized in that, if a sound matching the extracted human voice feature data is found, determining the scene mode corresponding to the matched sound comprises:
determining the scene mode corresponding to the matched sound according to a preset correspondence between sounds and scene modes.
3. The method according to claim 1, characterized in that, if a sound matching the extracted human voice feature data is found, determining the scene mode corresponding to the matched sound comprises:
if a sound matching the extracted human voice feature data is found, determining that the scene mode corresponding to the matched sound is silent mode;
and setting the current scene mode to the determined scene mode comprises:
setting the current scene mode to silent mode, and keeping it for a preset time.
4. The method according to claim 2, characterized in that collecting ambient sound and extracting human voice feature data from the ambient sound comprises:
collecting ambient sound at preset time intervals, and extracting human voice feature data from the ambient sound.
5. The method according to any one of claims 1 to 4, characterized in that setting the current scene mode to the determined scene mode comprises:
judging whether the current scene mode is identical to the determined scene mode;
if the current scene mode differs from the determined scene mode, setting the current scene mode to the determined scene mode.
6. A terminal, characterized in that the terminal comprises:
an extraction unit, configured to collect ambient sound and extract human voice feature data from the ambient sound;
a search unit, configured to search a preset sound library for a sound that matches the extracted human voice feature data;
a determining unit, configured to, if a sound matching the extracted human voice feature data is found, determine the scene mode corresponding to the matched sound; wherein the scene mode identifies the alert style for incoming calls or text messages;
a setting unit, configured to set the current scene mode to the determined scene mode.
7. The terminal according to claim 6, characterized in that the determining unit is specifically configured to, if a sound matching the extracted human voice feature data is found, determine the scene mode corresponding to the matched sound according to a preset correspondence between sounds and scene modes.
8. The terminal according to claim 6, characterized in that the determining unit is specifically configured to, if a sound matching the extracted human voice feature data is found, determine that the scene mode corresponding to the matched sound is silent mode;
and the setting unit is specifically configured to set the current scene mode to silent mode, and keep it for a preset time.
9. The terminal according to claim 7, characterized in that the extraction unit is specifically configured to collect ambient sound at preset time intervals, and extract human voice feature data from the ambient sound.
10. The terminal according to any one of claims 6 to 9, characterized in that the setting unit further comprises:
a judging unit, configured to judge whether the current scene mode is identical to the determined scene mode;
a switching unit, configured to, if the current scene mode differs from the determined scene mode, set the current scene mode to the determined scene mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610831740.9A CN106254677A (en) | 2016-09-19 | 2016-09-19 | A kind of scene mode setting method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610831740.9A CN106254677A (en) | 2016-09-19 | 2016-09-19 | A kind of scene mode setting method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106254677A true CN106254677A (en) | 2016-12-21 |
Family
ID=57598993
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610831740.9A Withdrawn CN106254677A (en) | 2016-09-19 | 2016-09-19 | A kind of scene mode setting method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106254677A (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107979694A (en) * | 2017-11-20 | 2018-05-01 | 珠海市魅族科技有限公司 | Incoming call reminding method and device, computer installation and computer-readable recording medium |
CN109597313A (en) * | 2018-11-30 | 2019-04-09 | 新华三技术有限公司 | Method for changing scenes and device |
CN109639904A (en) * | 2019-01-25 | 2019-04-16 | 努比亚技术有限公司 | A kind of handset mode method of adjustment, system and computer storage medium |
CN117478784A (en) * | 2023-12-27 | 2024-01-30 | 珠海格力电器股份有限公司 | Incoming call mode switching method, device, equipment and storage medium |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015085959A1 (en) * | 2013-12-09 | 2015-06-18 | 腾讯科技(深圳)有限公司 | Voice processing method and device |
CN105391859A (en) * | 2015-11-09 | 2016-03-09 | 小米科技有限责任公司 | Switching method and apparatus of scene modes |
CN105898075A (en) * | 2016-06-14 | 2016-08-24 | 乐视控股(北京)有限公司 | Method and device for automatically adjusting contextual model |
- 2016-09-19: CN CN201610831740.9A patent CN106254677A/en — not active (withdrawn)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015085959A1 (en) * | 2013-12-09 | 2015-06-18 | 腾讯科技(深圳)有限公司 | Voice processing method and device |
CN105391859A (en) * | 2015-11-09 | 2016-03-09 | 小米科技有限责任公司 | Switching method and apparatus of scene modes |
CN105898075A (en) * | 2016-06-14 | 2016-08-24 | 乐视控股(北京)有限公司 | Method and device for automatically adjusting contextual model |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107979694A (en) * | 2017-11-20 | 2018-05-01 | 珠海市魅族科技有限公司 | Incoming call reminding method and device, computer installation and computer-readable recording medium |
CN109597313A (en) * | 2018-11-30 | 2019-04-09 | 新华三技术有限公司 | Method for changing scenes and device |
CN109639904A (en) * | 2019-01-25 | 2019-04-16 | 努比亚技术有限公司 | A kind of handset mode method of adjustment, system and computer storage medium |
CN117478784A (en) * | 2023-12-27 | 2024-01-30 | 珠海格力电器股份有限公司 | Incoming call mode switching method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104750408B (en) | Event notification management method and electronic device | |
CN106254677A (en) | A kind of scene mode setting method and terminal | |
CN106453904A (en) | Information reminding method and terminal | |
CN106569585A (en) | Method and terminal of managing application program process | |
CN108170438A (en) | A kind of application program automatic installation method, terminal and computer-readable medium | |
CN106254626A (en) | A kind of incoming display method and terminal | |
CN106201178A (en) | A kind of adjustment screen display direction control method and terminal | |
CN106156583A (en) | A kind of method of speech unlocking and terminal | |
CN104639758A (en) | Alarm clock control method and alarm clock control device applied to intelligent terminal | |
CN107168602A (en) | One kind control application drawing calibration method and terminal | |
CN106375548A (en) | Method for processing voice information and terminal | |
CN107273111A (en) | A kind of multi-screen display method and terminal | |
CN106469396A (en) | A kind of method of advertisement information and terminal | |
CN108932093A (en) | Split screen application switching method, device, storage medium and electronic equipment | |
CN107197082A (en) | A kind of information prompting method and terminal | |
CN106200976A (en) | A kind of motion-activated method and terminal | |
CN106303353A (en) | A kind of video session control method and terminal | |
CN106303003A (en) | The method of a kind of application recommendation and terminal | |
CN106155554A (en) | A kind of multi-screen display method and terminal | |
CN106250111A (en) | A kind of wallpaper acquisition methods and terminal | |
CN106201639A (en) | A kind of replacing application drawing calibration method and terminal | |
CN106227752A (en) | A kind of photograph sharing method and terminal | |
CN106202493A (en) | A kind of travel information creation method and terminal | |
CN106547539A (en) | A kind of footmark display packing and terminal | |
CN106294023A (en) | A kind of method of data backup and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20161221 |