CN106507280A - Emotion monitoring method and related device - Google Patents

Emotion monitoring method and related device

Info

Publication number
CN106507280A
CN106507280A (application CN201610972791.3A)
Authority
CN
China
Prior art keywords
user
terminal
state
audio
emotion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610972791.3A
Other languages
Chinese (zh)
Inventor
周红锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Original Assignee
Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yulong Computer Telecommunication Scientific Shenzhen Co Ltd
Priority to CN201610972791.3A
Publication of CN106507280A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M 1/00 Substation equipment, e.g. for use by subscribers
    • H04M 1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M 1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M 1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M 1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/20 Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • General Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Child & Adolescent Psychology (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Environmental & Geological Engineering (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The embodiment of the invention discloses an emotion monitoring method and a related device. The method includes: receiving user state data collected in a preset time period by a first terminal that has established a communication connection with the device, the user state data including user audio data and corresponding collection times; determining, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, the user emotional state being either a negative emotional state or a normal emotional state; obtaining the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state; and, if that ratio falls within a preset ratio range, sending prompt information to a second terminal that has established a communication connection with the device. The embodiment of the invention also discloses a corresponding emotion monitoring device and a wearable device. The embodiment of the invention helps a guardian learn the emotional state of the monitored person in a timely manner.

Description

Emotion monitoring method and related device
Technical field
The present invention relates to the field of mobile communication technology, and in particular to an emotion monitoring method and a related device.
Background technology
In recent years, with continued economic and social development, more and more young people choose to seek opportunities for survival and development in cities away from their hometowns. After marrying and having children, many of them, limited by time and energy, leave their children in their hometowns in the care of grandparents. As a result, a large number of minors left behind at home have become left-behind children, and this group keeps growing, becoming a common social phenomenon. Owing to the lack of normal family education, problems concerning the safety, schooling, moral development and mental health of left-behind children have become increasingly prominent and urgently require the attention of the whole society.
Under these circumstances, parents can usually learn about a child's emotional fluctuations only by periodically consulting the child's school teachers or other temporary guardians. This approach does not allow parents to learn the child's emotional state in a timely manner.
Content of the invention
Embodiments of the present invention provide an emotion monitoring method and a related device, which help a guardian learn the emotional state of the monitored person in a timely manner.
In a first aspect, an embodiment of the present invention provides an emotion monitoring method, including:
receiving user state data collected in a preset time period by a first terminal that has established a communication connection with the device, wherein the user state data include user audio data and corresponding collection times;
determining, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state;
obtaining the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state;
if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within a preset ratio range, sending prompt information to a second terminal that has established a communication connection with the device.
Optionally, if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within the preset ratio range, the method further includes:
analyzing the user audio data and splitting the collected user audio data into at least one piece of user audio sub-data, wherein each piece of user audio sub-data corresponds to one event;
extracting key audio information from each piece of user audio sub-data, and detecting whether any pre-stored target key audio information matches the extracted key audio information;
if target key audio information matching the extracted key audio information exists, obtaining the analysis result corresponding to that target key audio information, the analysis result indicating either that the target key audio information is beneficial to the user or that it is harmful to the user;
when the analysis result indicates that the target key audio information is harmful to the user, obtaining a pre-stored solution corresponding to that target key audio information;
sending the solution to the second terminal.
Optionally, before the user state data collected in the preset time period by the first terminal that has established a communication connection with the device are received, the method further includes:
obtaining a binding request sent by the first terminal, the binding request carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal;
establishing, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
Optionally, sending the prompt information to the second terminal that has established a communication connection with the device includes:
obtaining the first terminal identifier of the first terminal;
determining, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier;
sending the prompt information to the second terminal corresponding to the second terminal identifier.
In a second aspect, an embodiment of the present invention provides an emotion monitoring method, including:
collecting user state data when it is detected that the user's state switches from a sleep state to an awake state, wherein the user state data include user audio data and corresponding collection times;
sending, when it is detected that the user switches from the awake state to the sleep state, the user state data collected while the user was awake to an emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state; obtains the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state; and, if it determines that this ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
In a third aspect, an embodiment of the present invention provides an emotion monitoring device, including:
a receiving unit, configured to receive user state data collected in a preset time period by a first terminal that has established a communication connection with the emotion monitoring device, wherein the user state data include user audio data and corresponding collection times;
a determining unit, configured to determine, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state;
an acquiring unit, configured to obtain the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state;
a sending unit, configured to send prompt information to a second terminal that has established a communication connection with the device if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within a preset ratio range.
Optionally, the emotion monitoring device further includes:
an analyzing unit, configured to analyze the user audio data and split the collected user audio data into at least one piece of user audio sub-data if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within the preset ratio range, wherein each piece of user audio sub-data corresponds to one event;
an extracting unit, configured to extract key audio information from each piece of user audio sub-data and to detect whether any pre-stored target key audio information matches the extracted key audio information;
the acquiring unit being further configured to obtain, if target key audio information matching the extracted key audio information exists, the analysis result corresponding to that target key audio information, the analysis result indicating either that the target key audio information is beneficial to the user or that it is harmful to the user;
the acquiring unit being further configured to obtain, when the analysis result indicates that the target key audio information is harmful to the user, the pre-stored solution corresponding to that target key audio information;
the sending unit being further configured to send the solution to the second terminal.
Optionally, the emotion monitoring device further includes:
a binding unit, configured to obtain, before the receiving unit receives the user state data collected in the preset time period by the first terminal that has established a communication connection with the emotion monitoring device, a binding request sent by the first terminal and carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal, and to establish, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
Optionally, when sending the prompt information to the second terminal that has established a communication connection with the device, the sending unit is specifically configured to obtain the first terminal identifier of the first terminal; determine, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier; and send the prompt information to the second terminal corresponding to the second terminal identifier.
In a fourth aspect, an embodiment of the present invention provides a wearable device, including:
a collecting unit, configured to collect user state data when it is detected that the user's state switches from a sleep state to an awake state, wherein the user state data include user audio data and corresponding collection times;
a sending unit, configured to send, when it is detected that the user switches from the awake state to the sleep state, the user state data collected while the user was awake to an emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state; obtains the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state; and, if it determines that this ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
In a fifth aspect, an embodiment of the present invention provides an emotion monitoring device, including:
a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface being connected through the communication bus and communicating with one another;
the memory storing executable program code, and the communication interface being used for wireless communication;
the processor being configured to call the executable program code in the memory to execute all or part of the steps described in any method of the first embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention provides a wearable device, including:
a processor, a memory, a communication interface and a communication bus, the processor, the memory and the communication interface being connected through the communication bus and communicating with one another;
the memory storing executable program code, and the communication interface being used for wireless communication;
the processor being configured to call the executable program code in the memory to execute all or part of the steps described in any method of the second embodiment of the present invention.
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
Description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention or in the prior art, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an emotion monitoring method disclosed in an embodiment of the present invention;
Fig. 2-a is a schematic flowchart of another emotion monitoring method disclosed in an embodiment of the present invention;
Fig. 2-b is a schematic flowchart of another emotion monitoring method disclosed in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another emotion monitoring method disclosed in an embodiment of the present invention;
Fig. 4 is a block diagram of the units of an emotion monitoring device disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an emotion monitoring device disclosed in an embodiment of the present invention;
Fig. 6 is a block diagram of the units of a wearable device disclosed in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present invention.
Specific embodiment
In order to help those skilled in the art better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The terms "first" and "second" in the description, the claims and the accompanying drawings are used to distinguish different objects rather than to describe a particular order. In addition, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion. A process, method, system, product or device that comprises a series of steps or units is not limited to the listed steps or units, but may also include steps or units that are not listed, or other steps or units inherent to the process, method, product or device.
Reference herein to "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment can be included in at least one embodiment of the present invention. The appearances of this phrase in various places in the description do not necessarily all refer to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein may be combined with other embodiments.
To better understand the emotion monitoring method and mobile terminal disclosed in the embodiments of the present invention, the embodiments of the present invention are described in detail below.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of an emotion monitoring method provided by an embodiment of the present invention. The emotion monitoring method in this embodiment is described from the side of the emotion monitoring device (the device). As shown in the figure, the emotion monitoring method includes:
S101. Receive user state data collected in a preset time period by a first terminal that has established a communication connection with the device, wherein the user state data include user audio data and corresponding collection times.
The preset time period may be set by the user, or may be the time period during which the first terminal detects that the user is not asleep. For example, the first terminal starts working every morning when it detects that the user has woken up and collects user state data, and in the evening, when it detects that the user has fallen asleep, it sends the user state data collected during the day to the device.
Before the user state data collected in the preset time period by the first terminal are received, the device may also obtain a binding request sent by the first terminal and carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal, and establish, based on these identifiers, a binding relationship between the first terminal identifier and the second terminal identifier.
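By way of illustration only, the binding step could amount to little more than maintaining a mapping from first terminal identifiers to second terminal identifiers. The following Python sketch is an assumption about how such a table might be kept; the request format and the identifier values are hypothetical.

```python
# first_terminal_id -> second_terminal_id, filled in before any user state data is accepted.
BINDINGS: dict[str, str] = {}

def handle_bind_request(request: dict) -> None:
    """Establish the binding relationship carried by a bind request from the first terminal."""
    BINDINGS[request["first_terminal_id"]] = request["second_terminal_id"]

# Example: the child's wearable (first terminal) binds itself to the parent's phone (second terminal).
handle_bind_request({"first_terminal_id": "wearable-001", "second_terminal_id": "phone-042"})
```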
S102. Determine, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state.
The user audio data may include audio of the user laughing and audio of the user crying; crying audio indicates that the user is in a negative emotional state.
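A minimal Python sketch of how step S102 might label timestamped audio segments, assuming each segment arrives with a vocal-event label ("crying", "laughing", and so on) produced by some upstream detector on the first terminal; the segment format and field names are illustrative, not part of the disclosed embodiment.

```python
from dataclasses import dataclass

# Each segment is assumed to carry a vocal-event label produced by an upstream detector
# on the first terminal; the embodiment does not specify the detector itself.
@dataclass
class AudioSegment:
    start: float        # collection time, seconds from the start of the preset time period
    end: float
    vocal_event: str    # "crying", "laughing", "talking", ...

NEGATIVE, NORMAL = "negative", "normal"

def classify_emotion(segment: AudioSegment) -> str:
    """Crying audio corresponds to the negative emotional state; everything else is treated as normal."""
    return NEGATIVE if segment.vocal_event == "crying" else NORMAL

def emotion_durations(segments: list[AudioSegment]) -> dict[str, float]:
    """Accumulate how long the user spent in each emotional state over the preset time period."""
    totals = {NEGATIVE: 0.0, NORMAL: 0.0}
    for seg in segments:
        totals[classify_emotion(seg)] += seg.end - seg.start
    return totals
```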
S103. Obtain the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state.
The time spent in the normal emotional state may be obtained by taking the time covered by the user audio data collected in the preset time period and subtracting the time the user spends in the negative emotional state.
S104. Determine whether the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within a preset ratio range.
The device may also obtain the age of the first terminal user, uploaded by the first terminal; the preset ratio range may then be the range, for the age bracket to which the first terminal user belongs, of abnormal ratios of time spent in the negative emotional state to time spent in the normal emotional state.
S105. If it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within the preset ratio range, send prompt information to a second terminal that has established a communication connection with the device.
The user of the first terminal may be the monitored person (for example, a child), and the user of the second terminal may be the guardian (for example, a parent).
The prompt information may be sent to the second terminal that has established a communication connection with the device as follows: obtain the first terminal identifier of the first terminal; determine, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier; and send the prompt information to the second terminal corresponding to the second terminal identifier.
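Putting steps S103 to S105 together, a rough sketch of the ratio check and the prompt to the bound second terminal could look like the following; the age-bracket ratio ranges, the binding table and the `push_prompt` transport call are assumptions made for illustration, not the actual implementation.

```python
# Hypothetical age-bracket table: (low, high) bounds of the negative/normal time ratio
# that should trigger a prompt for users in that age bracket.
PRESET_RATIO_RANGES = {
    "under_12": (0.25, float("inf")),
    "12_and_over": (0.40, float("inf")),
}

# Binding table built when the bind request was processed (see the sketch after S101 above).
BINDINGS: dict[str, str] = {"wearable-001": "phone-042"}

def ratio_in_preset_range(durations: dict[str, float], age: int) -> bool:
    """S103/S104: compare the negative/normal time ratio with the preset range for the user's age bracket."""
    if durations["normal"] == 0:
        return durations["negative"] > 0
    ratio = durations["negative"] / durations["normal"]
    low, high = PRESET_RATIO_RANGES["under_12" if age < 12 else "12_and_over"]
    return low <= ratio <= high

def notify_guardian(first_terminal_id: str, durations: dict[str, float], age: int) -> None:
    """S105: look up the second terminal bound to the first terminal and send it prompt information."""
    if not ratio_in_preset_range(durations, age):
        return
    second_terminal_id = BINDINGS.get(first_terminal_id)
    if second_terminal_id is None:
        return  # no bound second terminal, nothing to notify
    push_prompt(second_terminal_id,
                "The monitored user has spent an unusually long time in a negative emotional state.")

def push_prompt(terminal_id: str, message: str) -> None:
    """Placeholder for whatever push channel the device actually uses."""
    print(f"prompt -> {terminal_id}: {message}")
```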
Optionally, if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within the preset ratio range, the device may further perform the following operations:
analyzing the user audio data and splitting the collected user audio data into at least one piece of user audio sub-data, wherein each piece of user audio sub-data corresponds to one event;
extracting key audio information from each piece of user audio sub-data, and detecting whether any pre-stored target key audio information matches the extracted key audio information;
if target key audio information matching the extracted key audio information exists, obtaining the analysis result corresponding to that target key audio information, the analysis result indicating either that the target key audio information is beneficial to the user or that it is harmful to the user;
when the analysis result indicates that the target key audio information is harmful to the user, obtaining the pre-stored solution corresponding to that target key audio information;
sending the solution to the second terminal.
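The optional analysis flow (splitting the audio into sub-data, matching pre-stored target key audio information and forwarding a solution when it is harmful) might be sketched as below; the keyword-style matching and the `TARGET_KEY_INFO` store are assumptions, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AudioSubData:
    event: str            # the event this piece of user audio sub-data corresponds to
    key_info: set[str]    # key audio information extracted from the sub-data, e.g. sensitive words

# Pre-stored target key audio information with its analysis result and, when harmful, a solution.
TARGET_KEY_INFO = {
    "bullying": {"result": "harmful", "solution": "Contact the class teacher and talk with the child."},
    "award":    {"result": "beneficial", "solution": None},
}

def analyze_and_forward(sub_data: list[AudioSubData], second_terminal_id: str) -> None:
    for item in sub_data:
        for key in item.key_info:
            target = TARGET_KEY_INFO.get(key)
            if target is None:
                continue                       # no pre-stored target key audio information matches
            if target["result"] == "harmful":  # harmful to the user: forward the pre-stored solution
                push_prompt(second_terminal_id, f"Event '{item.event}': {target['solution']}")

def push_prompt(terminal_id: str, message: str) -> None:
    """Placeholder push channel, the same assumption as in the earlier sketch."""
    print(f"prompt -> {terminal_id}: {message}")
```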
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
Referring to Fig. 2-a and Fig. 2-b, Fig. 2-a is a schematic flowchart of another emotion monitoring method provided by an embodiment of the present invention, and Fig. 2-b is a schematic flowchart of a further emotion monitoring method provided by an embodiment of the present invention. The emotion monitoring method in this embodiment is described from the side of the emotion monitoring device (the device). As shown in the figures, the emotion monitoring method includes:
S201. Obtain a binding request sent by the first terminal, the binding request carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal.
S202. Establish, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
S203. Receive user state data collected in a preset time period by the first terminal that has established a communication connection with the device, wherein the user state data include user audio data and corresponding collection times.
S204. Determine, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state.
S205. Obtain the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state.
S206. Determine whether the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within a preset ratio range.
If it is determined that the ratio falls within the preset ratio range, the device may execute steps S207 to S209 as shown in Fig. 2-a, or may execute steps S210 to S214 as shown in Fig. 2-b.
S207. Obtain the first terminal identifier of the first terminal.
S208. Determine, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier.
S209. Send the prompt information to the second terminal corresponding to the second terminal identifier.
S210. Analyze the user audio data and split the collected user audio data into at least one piece of user audio sub-data, wherein each piece of user audio sub-data corresponds to one event.
Each event has corresponding event parameters, which may include the time of occurrence of the event, the cause of the event, the user's emotional state, the wording the user used, and so on; an illustrative representation of these parameters is sketched after step S214 below.
S211. Extract key audio information from each piece of user audio sub-data, and detect whether any pre-stored target key audio information matches the extracted key audio information.
The key audio information may include sensitive words or sentences.
S212. If target key audio information matching the extracted key audio information exists, obtain the analysis result corresponding to that target key audio information, the analysis result indicating either that the target key audio information is beneficial to the user or that it is harmful to the user.
S213. When the analysis result indicates that the target key audio information is harmful to the user, obtain the pre-stored solution corresponding to that target key audio information.
S214. Send the solution to the second terminal.
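As a purely illustrative example, the event parameters mentioned in steps S210 and S211 and the sensitive words that make up the key audio information could be carried in a structure such as the following; all field names and sample values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class EventParams:
    occurred_at: str                 # time of occurrence of the event
    cause: str                       # cause of the event
    emotional_state: str             # "negative" or "normal"
    wording: str                     # how the user expressed it in speech
    sensitive_words: set[str] = field(default_factory=set)  # key audio information: sensitive words or sentences

# One piece of user audio sub-data corresponds to one event:
example = EventParams(
    occurred_at="2016-11-01 15:20",
    cause="argument during a game at school",
    emotional_state="negative",
    wording="He took my ball and pushed me",
    sensitive_words={"pushed"},
)
```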
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
Referring to Fig. 3, Fig. 3 is a schematic flowchart of an emotion monitoring method provided by an embodiment of the present invention. The emotion monitoring method in this embodiment is described from the side of the wearable device (the first terminal). As shown in the figure, the emotion monitoring method includes:
S301. When it is detected that the user's state switches from a sleep state to an awake state, collect user state data, wherein the user state data include user audio data and corresponding collection times.
S302. When it is detected that the user switches from the awake state to the sleep state, send the user state data collected while the user was awake to the emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state; obtains the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state; and, if it determines that this ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
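On the wearable-device side, steps S301 and S302 amount to a small wake/sleep state machine: collect while the user is awake and upload the buffered data on the transition to sleep. A minimal sketch, in which `read_audio_chunk` and `upload` stand in for the real microphone and the connection to the emotion monitoring device:

```python
import time

class WearableCollector:
    """Collect user state data while the user is awake; send the buffer when the user falls asleep."""

    def __init__(self):
        self.buffer = []      # (collection_time, audio_chunk) pairs
        self.awake = False

    def on_state_change(self, now_awake: bool) -> None:
        if now_awake and not self.awake:        # sleep -> awake: start a fresh collection (S301)
            self.buffer.clear()
        elif not now_awake and self.awake:      # awake -> sleep: send what was collected (S302)
            self.upload(self.buffer)
            self.buffer.clear()
        self.awake = now_awake

    def collect(self) -> None:
        """Called periodically while the device is running."""
        if self.awake:
            self.buffer.append((time.time(), self.read_audio_chunk()))

    # Placeholders for the real microphone and the connection to the emotion monitoring device.
    def read_audio_chunk(self) -> bytes:
        return b""

    def upload(self, user_state_data) -> None:
        print(f"sending {len(user_state_data)} audio chunks to the emotion monitoring device")
```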
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
The following are device embodiments of the present invention, which are used to execute the methods implemented by the method embodiments of the present invention. As shown in Fig. 4, the emotion monitoring device may include a receiving unit 401, a determining unit 402, an acquiring unit 403 and a sending unit 404, wherein:
the receiving unit 401 is configured to receive user state data collected in a preset time period by a first terminal that has established a communication connection with the emotion monitoring device, wherein the user state data include user audio data and corresponding collection times;
the determining unit 402 is configured to determine, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state;
the acquiring unit 403 is configured to obtain the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state;
the sending unit 404 is configured to send prompt information to a second terminal that has established a communication connection with the device if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within a preset ratio range.
Optionally, the emotion monitoring device further includes:
an analyzing unit 405, configured to analyze the user audio data and split the collected user audio data into at least one piece of user audio sub-data if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within the preset ratio range, wherein each piece of user audio sub-data corresponds to one event;
an extracting unit 406, configured to extract key audio information from each piece of user audio sub-data and to detect whether any pre-stored target key audio information matches the extracted key audio information;
the acquiring unit 403 being further configured to obtain, if target key audio information matching the extracted key audio information exists, the analysis result corresponding to that target key audio information, the analysis result indicating either that the target key audio information is beneficial to the user or that it is harmful to the user;
the acquiring unit 403 being further configured to obtain, when the analysis result indicates that the target key audio information is harmful to the user, the pre-stored solution corresponding to that target key audio information;
the sending unit 404 being further configured to send the solution to the second terminal.
Optionally, the emotion monitoring device further includes:
a binding unit 407, configured to obtain, before the receiving unit 401 receives the user state data collected in the preset time period by the first terminal that has established a communication connection with the emotion monitoring device, a binding request sent by the first terminal and carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal, and to establish, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
Optionally, when sending the prompt information to the second terminal that has established a communication connection with the device, the sending unit 404 is specifically configured to obtain the first terminal identifier of the first terminal; determine, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier; and send the prompt information to the second terminal corresponding to the second terminal identifier.
It should be noted that the emotion monitoring device described in this device embodiment of the present invention is presented in the form of functional units. The term "unit" used herein should be understood in its broadest possible sense; the objects used to implement the functions described for each "unit" may be, for example, an integrated circuit (ASIC), a single circuit, a processor (shared, dedicated or chipset) and memory for executing one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the functions described above.
For example, the function of the receiving unit 401 described above, namely receiving the user state data collected in the preset time period by the first terminal that has established a communication connection with the emotion monitoring device, may be implemented by the emotion monitoring device shown in Fig. 5; specifically, the processor 501 may call the executable program code in the memory 502 to receive the user state data collected in the preset time period by the first terminal that has established a communication connection with the emotion monitoring device.
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
An embodiment of the present invention also provides another emotion monitoring device, as shown in Fig. 5, including a processor 501, a memory 502, a communication interface 503 and a communication bus 504, wherein the processor 501, the memory 502 and the communication interface 503 are connected through the communication bus 504 and communicate with one another. The processor 501 controls wireless communication with an external cellular network through the communication interface 503; the communication interface 503 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, an LNA (Low Noise Amplifier) and a duplexer. The memory 502 includes at least one of a random access memory (RAM), a non-volatile memory and an external memory; the memory 502 stores executable program code, which can instruct the processor 501 to execute the emotion monitoring method disclosed in the method embodiments of the present invention, including the following steps:
the processor 501 receives user state data collected in a preset time period by a first terminal that has established a communication connection with the device, wherein the user state data include user audio data and corresponding collection times;
the processor 501 determines, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state;
the processor 501 obtains the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state;
if the processor 501 determines that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within a preset ratio range, it sends prompt information to a second terminal that has established a communication connection with the device.
Optionally, if it is determined that the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state falls within the preset ratio range, the processor 501 may further be configured to perform the following operations:
analyzing the user audio data and splitting the collected user audio data into at least one piece of user audio sub-data, wherein each piece of user audio sub-data corresponds to one event;
extracting key audio information from each piece of user audio sub-data, and detecting whether any pre-stored target key audio information matches the extracted key audio information;
if target key audio information matching the extracted key audio information exists, obtaining the analysis result corresponding to that target key audio information, the analysis result indicating either that the target key audio information is beneficial to the user or that it is harmful to the user;
when the analysis result indicates that the target key audio information is harmful to the user, obtaining the pre-stored solution corresponding to that target key audio information;
sending the solution to the second terminal.
Optionally, before the user state data collected in the preset time period by the first terminal that has established a communication connection with the device are received, the processor 501 may further be configured to perform the following operations:
obtaining a binding request sent by the first terminal, the binding request carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal;
establishing, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
Optionally, the processor 501 may send the prompt information to the second terminal that has established a communication connection with the device as follows:
obtaining the first terminal identifier of the first terminal;
determining, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier;
sending the prompt information to the second terminal corresponding to the second terminal identifier.
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
The following are device embodiments of the present invention, which are used to execute the methods implemented by the method embodiments of the present invention. As shown in Fig. 6, the wearable device may include a collecting unit 601 and a sending unit 602, wherein:
the collecting unit 601 is configured to collect user state data when it is detected that the user's state switches from a sleep state to an awake state, wherein the user state data include user audio data and corresponding collection times;
the sending unit 602 is configured to send, when it is detected that the user switches from the awake state to the sleep state, the user state data collected while the user was awake to an emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state; obtains the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state; and, if it determines that this ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
It should be noted that the wearable device described in this device embodiment of the present invention is presented in the form of functional units. The term "unit" used herein should be understood in its broadest possible sense; the objects used to implement the functions described for each "unit" may be, for example, an integrated circuit (ASIC), a single circuit, a processor (shared, dedicated or chipset) and memory for executing one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the functions described above.
For example, the function of the collecting unit 601 described above, namely collecting user state data when it is detected that the user's state switches from the sleep state to the awake state, may be implemented by the wearable device shown in Fig. 7; specifically, the processor 701 may call the executable program code in the memory 702 to collect user state data when it is detected that the user's state switches from the sleep state to the awake state.
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
An embodiment of the present invention also provides another wearable device, as shown in Fig. 7, including a processor 701, a memory 702, a communication interface 703 and a communication bus 704, wherein the processor 701, the memory 702 and the communication interface 703 are connected through the communication bus 704 and communicate with one another. The processor 701 controls wireless communication with an external cellular network through the communication interface 703; the communication interface 703 includes, but is not limited to, an antenna, an amplifier, a transceiver, a coupler, an LNA (Low Noise Amplifier) and a duplexer. The memory 702 includes at least one of a random access memory (RAM), a non-volatile memory and an external memory; the memory 702 stores executable program code, which can instruct the processor 701 to execute the emotion monitoring method disclosed in the method embodiments of the present invention, including the following steps:
the processor 701 collects user state data when it is detected that the user's state switches from a sleep state to an awake state, wherein the user state data include user audio data and corresponding collection times;
when it is detected that the user switches from the awake state to the sleep state, the processor 701 sends the user state data collected while the user was awake to an emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection times, the user emotional state corresponding to the collected user audio data, wherein the user emotional state is either a negative emotional state or a normal emotional state; obtains the ratio of the time the user spends in the negative emotional state to the time spent in the normal emotional state; and, if it determines that this ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
As can be seen that emotion monitoring method provided in an embodiment of the present invention, first, emotion monitoring device is received and is built with equipment The User Status data that the first terminal of vertical communication connection is gathered in preset time period, the User Status data include user's sound Frequency according to this and corresponding acquisition time, then, based on audio user data and corresponding acquisition time, determines the use of collection The corresponding user emotion state of family voice data, wherein, user emotion state includes unhealthy emotion state or normal emotional state, Secondly, the ratio of time of the user in the unhealthy emotion state and the time in the normal emotional state is obtained, most Eventually, default if judging that time of the user in unhealthy emotion state is fallen into the ratio of the time in normal emotional state In proportion, send information to the second terminal that communication connection is set up with the equipment.It can be seen that, emotion monitoring device energy Enough when user is in the overlong time of unhealthy emotion state, send to the second terminal that communication connection is set up with the equipment and carry Show information, and then be conducive to superintendent to learn the real-time that emotion monitoring is lifted by the state of the emotion of superintendent in time.
The embodiment of the present invention also provides a computer storage medium. The computer storage medium may store a program which, when executed, performs some or all of the steps of any emotion monitoring method described in the above method embodiments.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of action combinations. However, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Moreover, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.

In the above embodiments, each embodiment has its own emphasis. For parts not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division of logical functions, and other divisions may be used in actual implementation. For instance, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable memory. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a portable hard disk, a magnetic disk or an optical disc.

Those of ordinary skill in the art will appreciate that all or some of the steps in the various methods of the above embodiments may be completed by a program instructing the relevant hardware. The program may be stored in a computer-readable memory, and the memory may include a flash drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, etc.

The embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there may be changes in the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. An emotion monitoring method, characterized by comprising:
receiving user status data collected within a preset time period by a first terminal that has established a communication connection with a device, wherein the user status data includes user audio data and the corresponding collection time;
determining, based on the user audio data and the corresponding collection time, a user emotion state corresponding to the collected user audio data, wherein the user emotion state includes an unhealthy emotion state or a normal emotion state;
obtaining a ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state;
if it is determined that the ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state falls within a preset ratio range, sending prompt information to a second terminal that has established a communication connection with the device.
2. The method according to claim 1, characterized in that, if it is determined that the ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state falls within the preset ratio range, the method further comprises:
analyzing the user audio data, and splitting the collected user audio data into at least one piece of user audio sub-data, wherein each piece of user audio sub-data corresponds to one event;
extracting key audio information from each piece of user audio sub-data, and detecting whether there is pre-stored target key audio information that matches the extracted key audio information;
if there is target key audio information that matches the extracted key audio information, obtaining an analysis result corresponding to the target key audio information, the analysis result including: the target key audio information is beneficial to the user, or the target key audio information is harmful to the user;
when the obtained analysis result is that the target key audio information is harmful to the user, obtaining a pre-stored solution corresponding to the target key audio information;
sending the solution to the second terminal (an illustrative sketch of this matching and solution lookup is given after the claims).
3. The method according to claim 1, characterized in that, before receiving the user status data collected within the preset time period by the first terminal that has established a communication connection with the device, the method further comprises:
obtaining a binding request sent by the first terminal, the binding request carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal;
establishing, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
4. The method according to claim 3, characterized in that sending prompt information to the second terminal that has established a communication connection with the device comprises:
obtaining the first terminal identifier of the first terminal;
determining, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier;
sending prompt information to the second terminal corresponding to the second terminal identifier.
5. An emotion monitoring method, characterized by comprising:
collecting user status data when it is detected that the state of a user switches from a sleep state to a waking state, wherein the user status data includes user audio data and the corresponding collection time;
when it is detected that the user switches from the waking state to the sleep state, sending the user status data collected while the user was in the waking state to an emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection time, a user emotion state corresponding to the collected user audio data, wherein the user emotion state includes an unhealthy emotion state or a normal emotion state, obtains a ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state, and, if it determines that the ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
6. An emotion monitoring device, characterized by comprising:
a receiving unit, configured to receive user status data collected within a preset time period by a first terminal that has established a communication connection with the emotion monitoring device, wherein the user status data includes user audio data and the corresponding collection time;
a determining unit, configured to determine, based on the user audio data and the corresponding collection time, a user emotion state corresponding to the collected user audio data, wherein the user emotion state includes an unhealthy emotion state or a normal emotion state;
an obtaining unit, configured to obtain a ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state;
a sending unit, configured to send prompt information to a second terminal that has established a communication connection with the device if it is determined that the ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state falls within a preset ratio range.
7. The emotion monitoring device according to claim 6, characterized in that the emotion monitoring device further comprises:
an analyzing unit, configured to, if it is determined that the ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state falls within the preset ratio range, analyze the user audio data and split the collected user audio data into at least one piece of user audio sub-data, wherein each piece of user audio sub-data corresponds to one event;
an extracting unit, configured to extract key audio information from each piece of user audio sub-data, and to detect whether there is pre-stored target key audio information that matches the extracted key audio information;
the obtaining unit is further configured to, if there is target key audio information that matches the extracted key audio information, obtain an analysis result corresponding to the target key audio information, the analysis result including: the target key audio information is beneficial to the user, or the target key audio information is harmful to the user;
the obtaining unit is further configured to, when the obtained analysis result is that the target key audio information is harmful to the user, obtain a pre-stored solution corresponding to the target key audio information;
the sending unit is further configured to send the solution to the second terminal.
8. The emotion monitoring device according to claim 6, characterized in that the emotion monitoring device further comprises:
a binding unit, configured to, before the receiving unit receives the user status data collected within the preset time period by the first terminal that has established a communication connection with the emotion monitoring device, obtain a binding request sent by the first terminal, the binding request carrying a first terminal identifier of the first terminal and a second terminal identifier of the second terminal, and to establish, based on the first terminal identifier of the first terminal and the second terminal identifier of the second terminal, a binding relationship between the first terminal identifier and the second terminal identifier.
9. The emotion monitoring device according to claim 8, characterized in that
the sending unit, when sending prompt information to the second terminal that has established a communication connection with the device, is specifically configured to: obtain the first terminal identifier of the first terminal; determine, based on the first terminal identifier, the second terminal identifier bound to the first terminal identifier; and send prompt information to the second terminal corresponding to the second terminal identifier.
10. A wearable device, characterized by comprising:
a collecting unit, configured to collect user status data when it is detected that the state of a user switches from a sleep state to a waking state, wherein the user status data includes user audio data and the corresponding collection time;
a sending unit, configured to, when it is detected that the user switches from the waking state to the sleep state, send the user status data collected while the user was in the waking state to an emotion monitoring device that has established a communication connection with the local terminal, so that the emotion monitoring device determines, based on the user audio data and the corresponding collection time, a user emotion state corresponding to the collected user audio data, wherein the user emotion state includes an unhealthy emotion state or a normal emotion state, obtains a ratio of the time the user spends in the unhealthy emotion state to the time spent in the normal emotion state, and, if it determines that the ratio falls within a preset ratio range, sends prompt information to a second terminal that has established a communication connection with the emotion monitoring device.
11. An emotion monitoring device, characterized by comprising:
a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are connected via the communication bus and communicate with one another;
the memory stores executable program code, and the communication interface is used for wireless communication;
the processor is configured to call the executable program code in the memory to perform the method according to any one of claims 1 to 4.
12. A wearable device, characterized by comprising:
a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface are connected via the communication bus and communicate with one another;
the memory stores executable program code, and the communication interface is used for wireless communication;
the processor is configured to call the executable program code in the memory to perform the method according to claim 5.
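As referenced in claim 2, the following Python sketch illustrates, for explanation only, the matching of extracted key audio information against pre-stored targets and the lookup of a solution for the harmful ones. It assumes that splitting the audio into per-event sub-data and extracting the key audio information have already been performed by components not shown, and the entries in the pre-stored table are invented examples rather than data from the embodiment.

```python
from typing import Dict, List, Optional, Tuple

# Hypothetical pre-stored table: target key audio information -> (analysis result, solution).
# Only entries analyzed as harmful to the user carry a solution to forward to the second terminal.
TARGET_KEY_AUDIO: Dict[str, Tuple[str, Optional[str]]] = {
    "argument with classmate": ("harmful", "Suggest the guardian discuss the conflict with the child."),
    "praised by teacher": ("beneficial", None),
}

def solutions_for(extracted_key_audio: List[str]) -> List[str]:
    """Match the key audio information extracted from each user audio sub-data
    item (one per event) against the pre-stored targets and collect the
    solutions associated with the harmful ones."""
    solutions: List[str] = []
    for key_info in extracted_key_audio:
        entry = TARGET_KEY_AUDIO.get(key_info)
        if entry is None:
            continue                        # no pre-stored target matches this event
        result, solution = entry
        if result == "harmful" and solution is not None:
            solutions.append(solution)      # these would be sent to the second terminal
    return solutions

# Example: two events recognized from the split audio, one harmful and one beneficial.
print(solutions_for(["argument with classmate", "praised by teacher"]))
```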
CN201610972791.3A 2016-10-28 2016-10-28 A kind of emotion monitoring method and relevant device Pending CN106507280A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610972791.3A CN106507280A (en) 2016-10-28 2016-10-28 A kind of emotion monitoring method and relevant device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610972791.3A CN106507280A (en) 2016-10-28 2016-10-28 A kind of emotion monitoring method and relevant device

Publications (1)

Publication Number Publication Date
CN106507280A true CN106507280A (en) 2017-03-15

Family

ID=58323238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610972791.3A Pending CN106507280A (en) 2016-10-28 2016-10-28 A kind of emotion monitoring method and relevant device

Country Status (1)

Country Link
CN (1) CN106507280A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742981A (en) * 2007-02-16 2010-06-16 希盟(中国)科技有限公司 Wearable mini-size intelligent healthcare system
CN101917585A (en) * 2010-08-13 2010-12-15 宇龙计算机通信科技(深圳)有限公司 Method, device and terminal for regulating video information sent from visual telephone to opposite terminal
CN103941853A (en) * 2013-01-22 2014-07-23 三星电子株式会社 Electronic device for determining emotion of user and method for determining emotion of user
CN105310703A (en) * 2014-07-02 2016-02-10 北京邮电大学 Method for obtaining subjective satisfaction on basis of pupil diameter data of user
WO2016072595A1 (en) * 2014-11-06 2016-05-12 삼성전자 주식회사 Electronic device and operation method thereof
CN105893771A (en) * 2016-04-15 2016-08-24 北京搜狗科技发展有限公司 Information service method and device and device used for information services

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108877357A (en) * 2018-06-21 2018-11-23 广东小天才科技有限公司 A kind of exchange method and private tutor's machine based on private tutor's machine
CN109040471A (en) * 2018-10-15 2018-12-18 Oppo广东移动通信有限公司 Emotive advisory method, apparatus, mobile terminal and storage medium
CN115204127A (en) * 2022-09-19 2022-10-18 深圳市北科瑞声科技股份有限公司 Form filling method, device, equipment and medium based on remote flow adjustment
CN115204127B (en) * 2022-09-19 2023-01-06 深圳市北科瑞声科技股份有限公司 Form filling method, device, equipment and medium based on remote flow adjustment

Similar Documents

Publication Publication Date Title
CN109427333A (en) Activate the method for speech-recognition services and the electronic device for realizing the method
CN106782554A (en) Voice awakening method and device based on artificial intelligence
CN103948398B (en) Be applicable to the heart sound location segmentation method of android system
CN106507280A (en) A kind of emotion monitoring method and relevant device
CN108345676A (en) Information-pushing method and Related product
CN103198838A (en) Abnormal sound monitoring method and abnormal sound monitoring device used for embedded system
CN103489282A (en) Infant monitor capable of identifying infant crying sound and method for identifying infant crying sound
CN108597164B (en) Anti-theft method, anti-theft device, anti-theft terminal and computer readable medium
CN109276255A (en) A kind of limb tremor detection method and device
CN108702421A (en) For controlling using the electronic equipment and method with component
Takano et al. Extracting commercialization opportunities of the Internet of Things: Measuring text similarity between papers and patents
CN103985383A (en) Infant or pet nursing method and nursing system and nursing machine adopting method
CN107105092A (en) A kind of human body tumble recognition methods based on dynamic time warping
CN108156705A (en) A kind of intelligent sound lamp light control system
CN110969805A (en) Safety detection method, device and system
CN105825848A (en) Method, device and terminal for voice recognition
CN110322898A (en) Vagitus detection method, device and computer readable storage medium
CN111524513A (en) Wearable device and voice transmission control method, device and medium thereof
CN102142257B (en) Audio signal processing method and device
CN110111815A (en) Animal anomaly sound monitoring method and device, storage medium, electronic equipment
Dooling Temporal summation of pure tones in birds.
CN105232063B (en) Detection method for mental health of user and intelligent terminal
CN105867641A (en) Screen reading application instruction input method and device based on brain waves
CN104407771A (en) Terminal
CN103605671A (en) Scientific research information evolution analyzing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170315