CN105551504A - Method and device for triggering function application of intelligent mobile terminal based on crying sound - Google Patents
- Publication number
- CN105551504A (application CN201510882379.8A)
- Authority
- CN
- China
- Prior art keywords
- sound
- mobile terminal
- intelligent mobile
- crying
- triggering
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72448—User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/011—Emotion or mood input determined on the basis of sensed human body parameters such as pulse, heart rate or beat, temperature of skin, facial expressions, iris, voice pitch, brain activity patterns
Abstract
The invention discloses a method and a device for triggering a function application of an intelligent mobile terminal based on crying. The method comprises the following steps: when a sound is monitored, extracting a sound feature of the sound and recording it as a first sound feature; and if the duration of the sound is greater than or equal to a first preset value, and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, triggering the intelligent mobile terminal to execute a comfort function, wherein the second sound feature is the crying feature of the current user of the intelligent mobile terminal. With this method, the intelligent mobile terminal can be triggered to actively console the user when the user cries, reducing the user's sadness. This achieves the goal of having the intelligent mobile terminal actively execute a function application, embodies the humanized character of the intelligent mobile terminal, and further improves the user experience.
Description
Technical Field
Embodiments of the invention relate to the technical field of computer applications, and in particular to a method and a device for triggering a function application of an intelligent mobile terminal based on crying.
Background
With the wide application and continuing development of computer technology, more and more intelligent products appear in people's lives, and the intelligent mobile terminal has become one of the most popular. Beyond the basic communication function, the intelligent mobile terminal provides great help for people's life, study, and work through its hardware and the various application software installed on it.
The existing intelligent mobile terminal integrates multiple functions into one device: it can serve as a camera for photographing, a navigator for navigation, a player for watching videos, a game machine for entertainment, and so on, fully embodying its multifunctionality. With the wide use of intelligent mobile terminals, people's expectations of their function applications keep rising: users expect the terminal to automatically sense their needs and actively provide corresponding intelligent application services. However, the multifunctional applications on existing intelligent mobile terminals serve the user only passively, and the proactive, humanized character expected of an intelligent mobile terminal is not well reflected.
Disclosure of Invention
The invention aims to provide a method and a device for triggering a function application of an intelligent mobile terminal based on crying, so as to actively execute a comfort function when the user cries.
On one hand, the embodiment of the invention provides a method for triggering intelligent mobile terminal function application based on crying, which comprises the following steps:
when a sound is monitored, extracting a sound feature of the sound and recording it as a first sound feature;
and if the duration of the sound is greater than or equal to a first preset value, and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, triggering the intelligent mobile terminal to execute a comfort function, wherein the second sound feature is the crying feature of the current user of the intelligent mobile terminal.
On the other hand, an embodiment of the invention provides a device for triggering a function application of an intelligent mobile terminal based on crying, which comprises:
the sound monitoring module, used for extracting a sound feature of a sound and recording it as a first sound feature when the sound is monitored;
and the function triggering module, used for triggering the intelligent mobile terminal to execute a comfort function when the duration of the sound is greater than or equal to a first preset value and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, wherein the second sound feature is the crying feature of the current user of the intelligent mobile terminal.
Embodiments of the invention provide a method and a device for triggering a function application of an intelligent mobile terminal based on crying. The method can be summarized as follows: when the user's voice is monitored, first extract its sound feature and compare it with the crying feature of the user acquired in advance; if the preset goodness of fit is reached and the sound duration exceeds a preset value, determine that the current user of the intelligent mobile terminal is sad, and then trigger the intelligent mobile terminal to execute the comfort function. With this method, when the user cries, the intelligent mobile terminal can be triggered to actively console the user and so reduce the user's sadness. This achieves the goal of having the intelligent mobile terminal actively execute a function application, reflects its humanized character, and further improves the user experience.
Drawings
Fig. 1 is a schematic flowchart of a method for triggering a function application of an intelligent mobile terminal based on crying according to the first embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for triggering a function application of an intelligent mobile terminal based on crying according to the second embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for triggering a function application of an intelligent mobile terminal based on crying according to the third embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained below through specific embodiments in combination with the drawings. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the present invention, not all structures.
Embodiment One
Fig. 1 is a schematic flowchart of a method for triggering a function application of an intelligent mobile terminal based on crying according to an embodiment of the present invention. The method may be executed by a device for triggering a function application of an intelligent mobile terminal based on crying, where the device may be implemented in software and/or hardware and is generally integrated in the intelligent mobile terminal.
As shown in fig. 1, a method for triggering application of an intelligent mobile terminal function based on crying provided in an embodiment of the present invention specifically includes the following operations:
s101, when sound is monitored, extracting sound characteristics of the sound and recording the sound characteristics as first sound characteristics.
In this embodiment, the method continuously monitors whether a sound appears around the intelligent mobile terminal. In general all sounds can be monitored, but preferably the monitored sounds are mainly those made by people. When a human voice is detected, its sound feature is extracted; the extracted feature is generally the frequency spectrum of the sound. For convenience of description, this embodiment records the extracted sound feature as the first sound feature. In this embodiment, the intelligent mobile terminal is specifically a mobile terminal capable of providing intelligent application functions for the user, such as a smartphone or a tablet computer.
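The patent does not specify how the spectral feature is computed, so the following is only a minimal sketch of one plausible choice: an average magnitude spectrum over fixed-size frames. The function name and frame size are assumptions, not values from the description.

```python
import numpy as np

def extract_sound_feature(samples, sample_rate, frame_size=1024):
    """Illustrative 'first sound feature': the magnitude spectrum of
    fixed-size frames, averaged into a single spectral profile."""
    n_frames = len(samples) // frame_size
    if n_frames == 0:
        raise ValueError("audio clip shorter than one frame")
    frames = np.reshape(samples[: n_frames * frame_size], (n_frames, frame_size))
    # Magnitude spectrum of each frame; averaging smooths out noise.
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return spectra.mean(axis=0)

# Example: a 440 Hz tone sampled at 8 kHz produces a peak near bin 56.
rate = 8000
t = np.arange(rate) / rate
feature = extract_sound_feature(np.sin(2 * np.pi * 440 * t), rate)
```

In a real implementation the samples would come from the terminal's microphone; here a synthetic tone stands in so the sketch is self-contained.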
S102, if the duration of the sound is greater than or equal to a first preset value, and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, triggering the intelligent mobile terminal to execute a comfort function, wherein the second sound feature is the crying feature of the current user of the intelligent mobile terminal.
In this embodiment, after the sound feature is extracted from the monitored sound, the duration of the sound continues to be monitored, and the extracted first sound feature is compared and judged. First, it is judged whether the duration of the sound is greater than or equal to the first preset value; then it is judged whether the goodness of fit between the first sound feature and the second sound feature acquired in advance is greater than or equal to the second preset value; finally, if both conditions are satisfied, the intelligent mobile terminal is triggered to execute the comfort function.
Further, the first preset value is 20 seconds, and the second preset value is 80%.
In this embodiment, the first preset value refers to the threshold set for judging the duration of the sound. Generally, this value is set within a range of 15 to 30 seconds and is preferably set to 20 seconds, although it is not limited to 20 seconds. The second preset value refers to the threshold for judging the degree of matching between the first sound feature and the second sound feature. Generally, this value may be set within a range of 70% to 100% and is preferably set to 80%, although it is not limited to 80%.
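The two-condition check of step S102 can be sketched directly; the constant names are illustrative, while the 20-second and 80% values come from the description above:

```python
FIRST_PRESET_SECONDS = 20.0  # preferred duration threshold from the text
SECOND_PRESET_MATCH = 0.80   # preferred goodness-of-fit threshold from the text

def should_trigger_comfort(duration_s, goodness_of_fit):
    """Both conditions of S102 must hold at the same time."""
    return duration_s >= FIRST_PRESET_SECONDS and goodness_of_fit >= SECOND_PRESET_MATCH

# A 25-second sound matching the stored crying feature at 85% triggers
# the comfort function; a weaker match or a shorter sound does not.
print(should_trigger_comfort(25.0, 0.85))  # True
print(should_trigger_comfort(25.0, 0.60))  # False
```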
In this embodiment, the second sound feature acquired in advance is the crying feature of the current user of the intelligent mobile terminal, which may be obtained through preprocessing. Triggering the intelligent mobile terminal to execute the comfort function is specifically expressed as follows: the intelligent mobile terminal is triggered to play music, videos, or cute animations, and may also be triggered to automatically send a comfort-seeking text message to relatives and friends, so as to relieve the user's sadness.
The method for triggering a function application of an intelligent mobile terminal based on crying provided by this embodiment extracts the sound feature of the monitored user sound, compares it with the crying feature acquired in advance, and triggers the intelligent mobile terminal to execute a comfort function if the sound duration reaches the preset value and the sound features match to the preset degree. With this method, the goal of actively executing the comfort function when the user cries is achieved, reflecting the humanized character of the intelligent mobile terminal.
Embodiment Two
Fig. 2 is a schematic flowchart of a method for triggering a function application of an intelligent mobile terminal based on crying according to an embodiment of the present invention, optimized on the basis of the above embodiment. In this embodiment, before the step "when a sound is monitored, extracting a sound feature of the sound and recording it as a first sound feature", the following steps are added: collecting sound samples of multiple persons when speaking and crying, and obtaining the correspondence between speaking sounds and crying sounds; collecting the speaking voice of the current user of the intelligent mobile terminal, extracting its sound feature, and recording it as a third sound feature; and acquiring the crying feature of the current user based on the correspondence between speaking and crying sounds and the third sound feature, and recording it as the second sound feature.
As shown in fig. 2, a method for triggering application of an intelligent mobile terminal function based on crying provided in an embodiment of the present invention specifically includes the following steps:
step S201, collecting sound samples of a plurality of people during speaking and crying, and obtaining the corresponding relation between the speaking sound and the crying sound.
In this embodiment, before the comfort function of the intelligent mobile terminal can be triggered by the user's crying, certain preprocessing work is required, for example collecting sound samples and extracting their sound features; step S201 therefore mainly covers the collection and processing of sound samples.
Specifically, this step can be described as follows: collect and record the normal speaking voices of multiple persons, and collect and record the crying sounds of the same persons; then process the collected speaking and crying samples with a voice analysis system, which analyzes the relation between a person's normal speaking voice and crying sound; finally, obtain the correspondence between people's speaking and crying sounds. Generally, this correspondence is analyzed and extracted mainly from the spectrograms of the speaking and crying sounds.
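The patent does not specify the form of the speech-to-cry correspondence, so the following sketch assumes one very simple model: the average per-frequency-bin ratio between crying and speaking spectra across the recorded speakers. Both function names are hypothetical.

```python
import numpy as np

def learn_speech_to_cry_mapping(speech_spectra, cry_spectra):
    """Hypothetical correspondence model for step S201: the per-bin ratio
    of crying energy to speaking energy, averaged over many speakers."""
    speech = np.asarray(speech_spectra, dtype=float)
    cry = np.asarray(cry_spectra, dtype=float)
    # Guard against division by zero in silent bins.
    return (cry / np.maximum(speech, 1e-12)).mean(axis=0)

def predict_cry_feature(mapping, user_speech_spectrum):
    """Analogue of step S203: simulate the user's crying feature
    from the user's speaking-voice spectrum (the 'third sound feature')."""
    return mapping * np.asarray(user_speech_spectrum, dtype=float)

# Toy data: two speakers whose crying energy is double their speaking energy.
speech = [[1.0, 2.0, 4.0], [2.0, 4.0, 8.0]]
cry = [[2.0, 4.0, 8.0], [4.0, 8.0, 16.0]]
mapping = learn_speech_to_cry_mapping(speech, cry)
predicted = predict_cry_feature(mapping, [3.0, 1.0, 2.0])
```

A production system would learn a richer model from real spectrograms; the ratio model only illustrates how a shared correspondence plus one user's speech can yield a personalized crying feature without ever recording that user crying.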
Step S202, the speaking voice of the current user of the intelligent mobile terminal is collected, the voice feature of the speaking voice is extracted, and the voice feature is recorded as a third voice feature.
In this embodiment, the preprocessing performed before triggering the comfort function also includes collecting samples of the speaking voice of the current user of the intelligent mobile terminal; step S202 can thus be understood as collecting and analyzing a sound sample of the current user.
Specifically, this step can be described as follows: when the current user of the intelligent mobile terminal makes a phone call, acquire the user's voice during the call, or record the user's speech directly, to achieve sound sample collection; then perform voice analysis on the collected speech and extract its sound feature. For convenience of description, the sound feature of the current user's speaking voice is recorded as the third sound feature.
Step S203, acquiring the crying feature of the current user based on the correspondence between speaking and crying sounds and the third sound feature, and recording it as the second sound feature.
In this embodiment, based on the correspondence between speaking and crying sounds acquired in step S201 and the sound feature of the current user's speech (the third sound feature) acquired in step S202, the sound feature of the current user's crying can be simulated. For convenience of description, the simulated crying feature of the current user is recorded as the second sound feature.
Step S204, when a sound is monitored, extracting the sound feature of the sound and recording it as the first sound feature.
In this embodiment, once steps S201 to S203 are completed, the preprocessing for triggering the comfort function based on the user's crying is finished. Sound monitoring then starts in step S204: the sound around the intelligent mobile terminal is monitored in real time, preferably focusing on sounds made by people; when a sound is detected, it is processed and its sound feature, the first sound feature, is extracted.
Further, the first sound feature and the second sound feature are both sound spectrograms.
Specifically, the sound features in the method of the present invention, such as the first and second sound features, are sound spectrograms. A sound spectrogram can be understood as a distribution curve of sound energy arranged by frequency. The spectrum can be divided by frequency into treble, mid, and bass bands; generally, the spectrum is used mainly for the analysis of timbre and pitch, thereby distinguishing different sounds.
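The band division mentioned above can be sketched as a simple frequency lookup. The cutoff frequencies below are assumptions for illustration only; the patent does not specify them.

```python
def band_of(freq_hz):
    """Assign a frequency to a band. The 250 Hz and 2000 Hz cutoffs
    are illustrative assumptions, not values from the patent."""
    if freq_hz < 250.0:
        return "bass"
    if freq_hz < 2000.0:
        return "mid"
    return "treble"

print(band_of(110.0))   # bass
print(band_of(440.0))   # mid
print(band_of(4000.0))  # treble
```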
Step S205, judging whether the duration of the sound is greater than or equal to the first preset value; if so, executing step S206; if not, returning to step S204.
For example, step S205 may be understood as: judging whether the duration of the monitored sound is greater than or equal to 20 seconds; if so, executing step S206; if the duration does not reach 20 seconds, returning directly to step S204. It should be noted that the 20-second duration in this embodiment is a preferred value and is not limited to this length.
Step S206, judging whether the goodness of fit between the first sound feature and the second sound feature is greater than or equal to the second preset value; if so, executing step S207; if not, returning to step S204.
For example, step S206 may be understood as: judging whether the goodness of fit between the feature of the monitored sound and the second sound feature acquired in advance is greater than or equal to 80%; if so, executing step S207; otherwise, returning directly to step S204.
In this embodiment, the goodness of fit between sound features can be understood specifically as the similarity or degree of coincidence between their spectrograms. It should be noted that the goodness-of-fit threshold set in this embodiment is also a preferred value and is not limited to it; furthermore, the following step S207 is executed only when the conditions of steps S205 and S206 are satisfied at the same time.
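The patent only requires that the goodness of fit measure spectrogram similarity, without naming a metric. One plausible choice is cosine similarity between spectral profiles, which for non-negative spectra lies in the 0-to-1 range used by the thresholds; the sketch below assumes that choice.

```python
import numpy as np

def goodness_of_fit(first_feature, second_feature):
    """Assumed similarity metric: cosine similarity between two
    spectral profiles (the patent does not specify the metric)."""
    a = np.asarray(first_feature, dtype=float)
    b = np.asarray(second_feature, dtype=float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # silence matches nothing
    return float(np.dot(a, b) / denom)

identical = goodness_of_fit([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # ~1.0
different = goodness_of_fit([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])   # 0.0
```

Comparing full spectrograms frame-by-frame, or using a distance such as spectral correlation, would serve the same role; cosine similarity is chosen here only because it maps directly onto the 80% threshold.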
Step S207, triggering the intelligent mobile terminal to execute the comfort function.
In this embodiment, the final purpose of the method is to automatically trigger the comfort function of the intelligent mobile terminal based on the user's crying; that is, after recognizing the user's crying, the intelligent mobile terminal responds, actively consoles the current user, and alleviates the user's sadness.
Further, triggering the intelligent mobile terminal to execute the comfort function specifically includes: triggering the intelligent mobile terminal to play preset music or an animation video; and/or triggering the intelligent mobile terminal to send a preset comfort-seeking text message to a preset phone number.
In this embodiment, the comfort function can take multiple forms. Specifically, the intelligent mobile terminal can be triggered to play music or animation videos. The music is stored in advance on the terminal's memory card, and its content can be set manually: it may be pre-recorded consoling words, the user's favorite songs, and so on. Similarly, the video to be played is also stored in advance on the memory card, and its content can be set manually: it may be a pre-recorded home video, or the user's favorite program or animation.
In addition, the method can also trigger the intelligent mobile terminal to automatically send a comfort-seeking text message to relatives and friends. Before the message can be sent, the target recipients and their phone numbers must be set manually; likewise, the content of the comfort-seeking message is edited in advance and stored in a corresponding folder.
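The comfort actions above can be sketched as a small dispatcher. Real media playback and SMS sending are platform APIs, so here they are represented as a returned action list; the configuration keys and example values are hypothetical stand-ins for the manually pre-set choices described in the text.

```python
def execute_comfort_function(settings):
    """Sketch of step S207's actions. `settings` stands for the user's
    manual pre-configuration: a pre-stored media file and a pre-edited
    comfort-seeking message with its target phone numbers."""
    actions = []
    if settings.get("media_file"):
        actions.append(("play", settings["media_file"]))
    for number in settings.get("sms_numbers", []):
        actions.append(("send_sms", number, settings["sms_text"]))
    return actions

# Hypothetical configuration combining both comfort forms.
actions = execute_comfort_function({
    "media_file": "favorite_song.mp3",
    "sms_numbers": ["<family phone number>"],
    "sms_text": "I'm feeling sad, please give me a call.",
})
```

On an actual terminal the `("play", ...)` and `("send_sms", ...)` actions would be handed to the media player and telephony services; returning them as data keeps the sketch testable without those services.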
The second embodiment of the invention provides a crying-based method for triggering a function application of an intelligent mobile terminal. The method first collects speaking and crying samples from multiple people and extracts the correspondence between speaking and crying; it then collects the speaking voice of the current user of the intelligent mobile terminal and, based on the correspondence and the user's speech, obtains a crying feature sample of the user in advance. Finally, the sound features obtained during monitoring are compared with this crying feature sample, and the comfort function of the intelligent mobile terminal is automatically triggered: the terminal automatically plays music or videos, or automatically sends a comfort-seeking text message to preset relatives and friends. With this method, the intelligent mobile terminal is triggered to automatically execute the comfort function based on recognition of the crying feature, reflecting its humanized character and improving the user experience.
Embodiment Three
Fig. 3 is a schematic structural diagram of a device for triggering application of an intelligent mobile terminal function based on crying provided by a third embodiment of the present invention. The apparatus may be implemented by software and/or hardware and is generally integrated in an intelligent mobile terminal. As shown in fig. 3, the specific structure of the apparatus is as follows: a sound monitoring module 311 and a function triggering module 312. Wherein,
the sound monitoring module 311 is configured to, when a sound is monitored, extract a sound feature of the sound and record the sound feature as a first sound feature;
and the function triggering module 312 is configured to trigger the intelligent mobile terminal to execute a comfort function when the duration of the sound is greater than or equal to a first preset value and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, where the second sound feature is the crying feature of the current user of the intelligent mobile terminal.
Further, the apparatus further comprises: a first acquisition module 301, a second acquisition module 302 and a crying feature acquisition module 303. Wherein,
the first acquisition module 301 is used for acquiring sound samples of a plurality of people during speaking and crying and analyzing the corresponding relation between the speaking sound and the crying sound;
the second acquisition module 302 is configured to acquire the speaking voice of the current user of the intelligent mobile terminal, extract the voice feature of the speaking voice, and record the voice feature as a third voice feature;
and the crying feature acquisition module 303 is configured to acquire a crying feature of the current user based on the correspondence between the speaking voice and the crying voice and the third sound feature, and record the crying feature as a second sound feature.
In this embodiment, the device first acquires sound samples of multiple people speaking and crying through the first acquisition module 301 to obtain the correspondence between speaking and crying sounds; the second acquisition module 302 then collects the speaking voice of the current user of the intelligent mobile terminal; next, the crying feature acquisition module 303 determines the sound feature of the current user's crying from the collected speech and the obtained correspondence. The sound monitoring module 311 then monitors continuously for sounds and, when one appears, extracts its sound feature. Finally, the function triggering module 312 judges whether the goodness of fit between the extracted feature and the user's determined crying feature is greater than or equal to the preset value, and whether the duration of the monitored sound is greater than or equal to the preset value; if both conditions are met, the intelligent mobile terminal is automatically triggered to execute a comfort function, such as playing preset music or animation videos, or sending a preset comfort-seeking text message to relatives and friends.
On the basis of the foregoing embodiment, the function triggering module 312 is specifically configured to: trigger the intelligent mobile terminal to play preset music or an animation video when the duration of the sound monitored by the sound monitoring module is greater than or equal to a first preset value and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value; and/or
trigger the intelligent mobile terminal to send a preset comfort-seeking text message to a preset phone number when the duration of the sound monitored by the sound monitoring module is greater than or equal to a first preset value and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value.
Meanwhile, in the device for triggering a function application of an intelligent mobile terminal based on crying, the first preset value is 20 seconds, and the second preset value is 80%.
In addition, on the basis of the above embodiment, the first sound feature and the second sound feature are both sound spectrograms.
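Since both sound features are spectrograms in this embodiment, the comparison step can be sketched concretely. The framing parameters (`frame_len`, `hop`), the Hann window, and the use of cosine similarity over averaged spectra as the "goodness of fit" are all illustrative assumptions; the patent does not specify them.

```python
# Sketch of comparing two sounds when both features are spectrograms,
# as in this embodiment. Framing parameters and the cosine-similarity
# "goodness of fit" are illustrative assumptions, not from the patent.
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via a short-time FFT over Hann-windowed frames."""
    window = np.hanning(frame_len)
    frames = [signal[i:i + frame_len] * window
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))

def spectrogram_fit(sig_a, sig_b):
    """Cosine similarity between the time-averaged spectra of two signals."""
    a = spectrogram(sig_a).mean(axis=0)
    b = spectrogram(sig_b).mean(axis=0)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A monitored sound whose `spectrogram_fit` against the stored crying spectrogram reaches the 80% threshold would satisfy the second preset condition.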
According to the device for triggering a function application of the intelligent mobile terminal based on crying provided by the embodiment of the invention, when the user cries, the intelligent mobile terminal can actively play music or video to comfort the user, or automatically send a comfort-seeking short message to the user's relatives and friends, thereby easing the user's sadness. This achieves the purpose of the intelligent mobile terminal actively executing a comfort function when the user cries, and embodies the humanized character of the intelligent mobile terminal.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.
Claims (10)
1. A method for triggering a function application of an intelligent mobile terminal based on crying, characterized by comprising the following steps:
when a sound is monitored, extracting a sound feature of the sound and recording it as a first sound feature; and
if the duration of the sound is greater than or equal to a first preset value, and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, triggering the intelligent mobile terminal to execute a comfort function, wherein the second sound feature is a crying sound feature of a current user of the intelligent mobile terminal.
2. The method of claim 1, wherein before extracting the sound feature of the sound and recording it as the first sound feature when the sound is monitored, the method further comprises:
collecting sound samples of a plurality of people while speaking and crying, and acquiring the corresponding relationship between speaking sound and crying sound;
the method comprises the steps of collecting the speaking voice of a current user of the intelligent mobile terminal, extracting the voice feature of the speaking voice and recording as a third voice feature;
acquiring the crying sound feature of the current user based on the corresponding relationship between speaking sound and crying sound and the third sound feature, and recording it as the second sound feature.
3. The method according to claim 1, wherein triggering the intelligent mobile terminal to execute the comfort function specifically comprises:
triggering the intelligent mobile terminal to play preset music or an animation video; and/or
triggering the intelligent mobile terminal to send a preset comfort-seeking short message to a preset mobile phone number.
4. The method according to any one of claims 1 to 3, wherein the first preset value is 20 seconds and the second preset value is 80 percent.
5. The method of any one of claims 1 to 3, wherein the first sound feature and the second sound feature are both sound spectrograms.
6. A device for triggering a function application of an intelligent mobile terminal based on crying, characterized by comprising:
a sound monitoring module, used for extracting a sound feature of a sound and recording it as a first sound feature when the sound is monitored; and
a function triggering module, used for triggering the intelligent mobile terminal to execute a comfort function when the duration of the sound is greater than or equal to a first preset value and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value, wherein the second sound feature is a crying sound feature of a current user of the intelligent mobile terminal.
7. The device of claim 6, further comprising:
a first acquisition module, used for collecting sound samples of a plurality of people while speaking and crying and acquiring the corresponding relationship between speaking sound and crying sound;
a second acquisition module, used for collecting the speaking voice of the current user of the intelligent mobile terminal, extracting a sound feature of the speaking voice, and recording it as a third sound feature; and
a crying feature acquisition module, used for acquiring the crying sound feature of the current user based on the corresponding relationship between speaking sound and crying sound and the third sound feature, and recording it as the second sound feature.
8. The device according to claim 6, wherein the function triggering module is specifically used for:
triggering the intelligent mobile terminal to play preset music or an animation video when the duration of the sound monitored by the sound monitoring module is greater than or equal to a first preset value and the goodness of fit between the first sound feature and a second sound feature acquired in advance is greater than or equal to a second preset value; and/or
triggering the intelligent mobile terminal to send a preset comfort-seeking short message to a preset mobile phone number when the duration of the sound monitored by the sound monitoring module is greater than or equal to the first preset value and the goodness of fit between the first sound feature and the second sound feature acquired in advance is greater than or equal to the second preset value.
9. The device according to any one of claims 6 to 8, wherein the first preset value is 20 seconds and the second preset value is 80 percent.
10. The device of any one of claims 6 to 8, wherein the first sound feature and the second sound feature are both sound spectrograms.
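Claims 2 and 7 describe deriving the user's crying feature (the second sound feature) from the user's speaking feature (the third sound feature) via a correspondence learned from sample speakers. The claims fix no model for this correspondence; the per-element ratio mapping below, and the function names, are purely illustrative assumptions.

```python
# Illustrative sketch of claims 2/7: learn a speech-to-cry correspondence
# from paired samples, then map the current user's speaking feature
# (third feature) to an estimated crying feature (second feature).
# The per-element ratio model is an assumption; the patent fixes no model.

def learn_correspondence(speech_samples, cry_samples):
    """Average element-wise ratio of cry features to speech features."""
    n = len(speech_samples)
    dims = len(speech_samples[0])
    ratios = [0.0] * dims
    for speech, cry in zip(speech_samples, cry_samples):
        for i in range(dims):
            ratios[i] += (cry[i] / speech[i]) / n
    return ratios

def predict_cry_feature(correspondence, third_feature):
    """Second sound feature: the user's speaking feature scaled by the mapping."""
    return [r * x for r, x in zip(correspondence, third_feature)]
```

The resulting second sound feature would then be compared against monitored sounds under the thresholds of claims 4 and 9.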
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510882379.8A CN105551504B (en) | 2015-12-03 | 2015-12-03 | A kind of method and device based on crying triggering intelligent mobile terminal functional application |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510882379.8A CN105551504B (en) | 2015-12-03 | 2015-12-03 | A kind of method and device based on crying triggering intelligent mobile terminal functional application |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105551504A true CN105551504A (en) | 2016-05-04 |
CN105551504B CN105551504B (en) | 2019-04-23 |
Family
ID=55830652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510882379.8A Expired - Fee Related CN105551504B (en) | 2015-12-03 | 2015-12-03 | A kind of method and device based on crying triggering intelligent mobile terminal functional application |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105551504B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108961887A (en) * | 2018-07-24 | 2018-12-07 | 广东小天才科技有限公司 | Voice search control method and family education equipment |
CN108960157A (en) * | 2018-07-09 | 2018-12-07 | 广东小天才科技有限公司 | Man-machine interaction method based on intelligent desk lamp and intelligent desk lamp |
CN110874909A (en) * | 2018-08-29 | 2020-03-10 | 杭州海康威视数字技术股份有限公司 | Monitoring method, system and readable storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN2701025Y (en) * | 2004-05-24 | 2005-05-18 | 洪鸿文 | Electronic apparatus for calming crying babies through sound controlled automatic voices |
CN101064104A (en) * | 2006-04-24 | 2007-10-31 | 中国科学院自动化研究所 | Emotion voice creating method based on voice conversion |
CN101086741A (en) * | 2006-06-09 | 2007-12-12 | 索尼株式会社 | Information processing apparatus and information processing method |
CN102881284A (en) * | 2012-09-03 | 2013-01-16 | 江苏大学 | Unspecific human voice and emotion recognition method and system |
CN103489282A (en) * | 2013-09-24 | 2014-01-01 | 华南理工大学 | Infant monitor capable of identifying infant crying sound and method for identifying infant crying sound |
CN104347066A (en) * | 2013-08-09 | 2015-02-11 | 盛乐信息技术(上海)有限公司 | Deep neural network-based baby cry identification method and system |
CN105015608A (en) * | 2015-06-29 | 2015-11-04 | 叶秀兰 | Intelligent infant trolley |
Also Published As
Publication number | Publication date |
---|---|
CN105551504B (en) | 2019-04-23 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd. |
CB02 | Change of applicant information | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20190423 |
CF01 | Termination of patent right due to non-payment of annual fee |