CN116072102A - Emotion recognition method, device, equipment and storage medium


Info

Publication number: CN116072102A
Application number: CN202111299042.6A
Authority: CN (China)
Prior art keywords: emotion, intensity, feature, set period, emotion feature
Legal status: Pending
Other language: Chinese (zh)
Inventor: 张亚歌
Original and current assignee: ZTE Corp
Filed by ZTE Corp on 2021-11-04 (priority to CN202111299042.6A and PCT/CN2022/108192, published as WO2023077883A1)
Published as CN116072102A

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/02 Feature extraction for speech recognition; selection of recognition unit
    • G10L 15/06 Creation of reference templates; training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/26 Speech to text systems
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L 15/00-G10L 21/00, specially adapted for estimating an emotional state

Abstract

Embodiments of the present application disclose an emotion recognition method, device, equipment and storage medium. The method acquires a first emotion feature of a user at the current moment within a set period; adjusts the intensity of each second emotion feature already obtained in the set period according to the first emotion feature, and takes the first emotion feature as a new second emotion feature; and judges whether the current moment is the end time point of the set period. If not, the next first emotion feature is acquired and the adjustment operation is repeated until the end time point of the set period is reached; if so, the target emotion of the user in the set period is determined according to the intensities of the second emotion features. This emotion recognition method captures the user's stable emotion over a period of time and can therefore improve the accuracy of emotion recognition.

Description

Emotion recognition method, device, equipment and storage medium
Technical Field
Embodiments of the present invention relate to the technical field of emotion recognition, and in particular to an emotion recognition method, device, equipment and storage medium.
Background
At present, emotion recognition with artificial intelligence (Artificial Intelligence, AI) technology has become an important component of affective computing. Emotion recognition research covers facial expressions, voice, heart rate, behavior, text, physiological signals and the like, from which the emotional state of a user is inferred.
In the prior art, no distinction is made between transient and stable emotions, which can make emotion recognition inaccurate or delayed.
Disclosure of Invention
Embodiments of the present application provide an emotion recognition method, device, equipment and storage medium that can capture the stable emotion of a user over a period of time, thereby improving the accuracy of emotion recognition.
To achieve the above object, an embodiment of the present application discloses an emotion recognition method, including:
acquiring a first emotion feature of a user at the current moment within a set period, wherein the first emotion feature includes a time parameter, a category parameter and an intensity parameter;
adjusting the intensity of each second emotion feature already obtained in the set period according to the first emotion feature, and taking the first emotion feature as a new second emotion feature;
judging whether the current moment is the end time point of the set period;
if not, acquiring the next first emotion feature, and returning to the operation of adjusting the intensity of the obtained second emotion features in the set period according to the first emotion feature, until the end time point of the set period is reached;
if so, determining the target emotion of the user in the set period according to the intensities of the second emotion features.
To achieve the above object, an embodiment of the present application discloses an emotion recognition apparatus, including:
a first emotion feature acquisition module, configured to acquire a first emotion feature of the user at the current moment within a set period, wherein the first emotion feature includes a time parameter, a category parameter and an intensity parameter;
a second emotion feature adjustment module, configured to adjust, according to the first emotion feature, the intensity of each second emotion feature already obtained in the set period, and to take the first emotion feature as a new second emotion feature;
a time judgment module, configured to judge whether the current moment is the end time point of the set period;
a return execution module, configured to, when the current moment is not the end time point of the set period, acquire the next first emotion feature and return to the operation of adjusting the intensity of the obtained second emotion features in the set period according to the first emotion feature, until the end time point of the set period is reached;
a target emotion determination module, configured to determine, when the current moment is the end time point of the set period, the target emotion of the user in the set period according to the intensities of the second emotion features.
To achieve the above object, an embodiment of the present application discloses an emotion recognition device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the program, implements the emotion recognition method described in the embodiments of the present application.
To achieve the above object, embodiments of the present application disclose a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the emotion recognition method described in the embodiments of the present application.
The embodiments of the present application disclose an emotion recognition method, apparatus, device and storage medium: acquiring a first emotion feature of a user at the current moment within a set period, wherein the first emotion feature includes a time parameter, a category parameter and an intensity parameter; adjusting the intensity of each second emotion feature already obtained in the set period according to the first emotion feature, and taking the first emotion feature as a new second emotion feature; judging whether the current moment is the end time point of the set period; if not, acquiring the next first emotion feature and returning to the adjustment operation until the end time point of the set period is reached; if so, determining the target emotion of the user in the set period according to the intensities of the second emotion features. In the emotion recognition method provided by the embodiments, each time an emotion feature is acquired within the set period, the intensities of the previously acquired emotion features are adjusted according to the newly acquired one, until the end time point of the set period is reached; the target emotion of the user in the set period is then determined from the intensities of the emotion features. The method thus captures the user's stable emotion over a period of time and can improve the accuracy of emotion recognition.
Drawings
Fig. 1 is a flowchart of an emotion recognition method according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an emotion recognition apparatus according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an emotion recognition device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application clearer, embodiments of the present application are described in detail below with reference to the accompanying drawings. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with one another arbitrarily.
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the present invention and carry no special meaning in themselves; therefore, "module", "component" and "unit" may be used interchangeably.
In one embodiment, Fig. 1 is a flowchart of an emotion recognition method provided in an embodiment of the present application. The method is applicable to recognizing a user's emotion and may be performed by a mobile terminal. As shown in Fig. 1, the method includes S110-S150.
S110, acquiring a first emotion feature of the user at the current moment within a set period.
The first emotion feature includes a time parameter, a category parameter and an intensity parameter. The emotion categories may be categories summarized from experience or research, and may include, for example: surprise, happiness, boredom, sadness, fear, and the like. The set period may be a period configured by the user or a time span selected by the user, for example a recurring period of 3 days, or a user-selected span such as Monday to Wednesday; the set period is not limited here.
In this embodiment, the emotion feature of the user at the current moment may be determined by, but is not limited to, recording the user's voice signal, monitoring the music the user plays, and logging the user's search records.
Specifically, the first emotion feature of the user at the current moment within the set period may be obtained as follows: collect any one or a combination of the user's voice signal, the music signal played by the user, and the user's search records, and obtain the first emotion feature of the user at the current moment from any one or a combination of the voice signal, the music signal and the search records.
Here, only voice signals whose intensity exceeds a first set threshold are used. The intensity of a voice signal can be calculated as the accumulated magnitude of the electrical signal corresponding to the sound over a given sampling interval. In this embodiment, the collection window may be limited when collecting the user's voice: the sleeping period at night (e.g. 23:00-7:00) is excluded, and no voice is collected during it. Moreover, only voice whose intensity exceeds the first set threshold is collected; signals below the threshold are neither collected nor processed, which reduces the power consumption of the terminal and avoids handling invalid voice signals. Once a voice signal is obtained, it may be processed with AI techniques to obtain the corresponding first emotion feature.
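As a rough illustration only, the gating described above might be sketched in Python as follows; the accumulation formula, the function names and the fixed 23:00-7:00 window are illustrative assumptions, not details fixed by this application:

    import numpy as np

    def frame_intensity(samples: np.ndarray) -> float:
        # Accumulated magnitude of the electrical signal over one sampling
        # interval, one plausible reading of the intensity calculation above.
        return float(np.abs(samples).sum())

    def should_process(samples: np.ndarray, first_threshold: float, hour: int) -> bool:
        # Collect and process a voice frame only outside the sleeping period
        # and only when its intensity exceeds the first set threshold, so the
        # terminal spends no power on invalid signals.
        if hour >= 23 or hour < 7:  # assumed sleeping period, 23:00-7:00
            return False
        return frame_intensity(samples) > first_threshold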
Collection of the music signal played by the user can be triggered when the user starts music software on the terminal; the music signal is obtained during playback and processed with AI techniques to obtain the corresponding first emotion feature.
Collection of search records can be triggered when the user starts search software on the terminal. Search keywords are obtained by capturing page-jump addresses, or the user's search records are obtained in cooperation with a manufacturer or search engine; the records are then processed with AI techniques to obtain the corresponding first emotion feature.
In this embodiment, AI techniques may directly yield both the category and the intensity of the first emotion feature. Alternatively, AI techniques yield only the emotion category; an initial intensity is assigned to it, and the current moment is used as the time parameter, together forming the first emotion feature. The initial intensity may be set by the user, for example: different initial intensities for different emotion categories, or the same initial intensity for all categories. The initial intensity may also be determined by the source of the emotion feature: a feature recognized from a voice signal is assigned a higher initial intensity, while a feature recognized from a music signal or a search record is assigned a lower one. That is, emotion features extracted from voice signals are primary, and emotion features extracted by other means are auxiliary.
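A minimal sketch of this source-dependent seeding, assuming a simple lookup table (the names and the concrete values below are illustrative, not specified by the application):

    # Hypothetical initial intensities keyed by feature source: speech-derived
    # features are primary, music- and search-derived features are auxiliary.
    INITIAL_INTENSITY = {"speech": 80, "music": 30, "search": 30}

    def make_first_feature(category: str, source: str, now: float) -> dict:
        # Assemble a first emotion feature: the time parameter is the current
        # moment, the category comes from AI recognition, and the intensity is
        # seeded according to the source of the feature.
        return {"time": now,
                "category": category,
                "intensity": INITIAL_INTENSITY.get(source, 30)}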
S120, adjusting the intensity of each second emotion feature already obtained in the set period according to the first emotion feature, and taking the first emotion feature as a new second emotion feature.
In this embodiment, different emotions affect each other: for example, "sadness" after "happiness" reduces the intensity of "happiness", while "surprise" after "happiness" increases it.
Specifically, the intensity of the second emotion features already obtained in the set period may be adjusted according to the first emotion feature as follows: acquire the influence relationship and the intensity adjustment amount between the first emotion feature and each second emotion feature, and adjust the intensity of the second emotion features obtained in the set period according to that relationship and adjustment amount.
The influence relationships between emotion features include positive influence, negative influence and no influence. For example, "happiness" and "surprise" influence each other positively, "happiness" and "sadness" negatively, and "boredom" and "sadness" not at all. In this embodiment, emotion features of the same or a similar category are in a positive influence relationship, features of opposite categories are in a negative influence relationship, and features of different, unrelated categories are in a no-influence relationship. The adjustment amount between emotion features may be preset by the user.
According to the influence relationship and the intensity adjustment amount, the intensity of a second emotion feature obtained in the set period is adjusted as follows: if the first and the second emotion feature are in a positive influence relationship, the adjustment amount is added to the intensity of the second emotion feature; if they are in a negative influence relationship, the adjustment amount is subtracted from it; if they are in a no-influence relationship, the intensity of the second emotion feature is kept unchanged.
For example, assume the first emotion feature of the user at the current moment is A, and second emotion features B (intensity S1) and C (intensity S2) have already been acquired within the set period. A and B are in a positive influence relationship with intensity adjustment amount ΔS1, so the adjusted intensity of B is S1+ΔS1; A and C are in a negative influence relationship with intensity adjustment amount ΔS2, so the adjusted intensity of C is S2-ΔS2.
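These adjustment rules might be sketched as follows; the influence table is a toy example (the application leaves the concrete category pairs and adjustment amounts to user configuration), and the dictionary-based feature representation continues the sketch above:

    POSITIVE, NEGATIVE, NO_EFFECT = 1, -1, 0

    # Hypothetical influence table; unlisted pairs default to no influence.
    INFLUENCE = {("happy", "surprise"): POSITIVE,
                 ("happy", "sad"): NEGATIVE}

    def influence(cat_a: str, cat_b: str) -> int:
        # Same or similar categories reinforce each other; opposite categories
        # weaken each other; unrelated categories have no effect.
        if cat_a == cat_b:
            return POSITIVE
        return INFLUENCE.get((cat_a, cat_b),
                             INFLUENCE.get((cat_b, cat_a), NO_EFFECT))

    def adjust_intensities(second_features: list, first: dict, delta: float) -> None:
        # Apply the three rules of S120 to every stored second emotion feature:
        # add delta, subtract delta, or leave the intensity unchanged.
        for feat in second_features:
            feat["intensity"] += influence(first["category"], feat["category"]) * delta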
In this embodiment, taking the first emotion feature as a new second emotion feature proceeds as follows: if the category of the first emotion feature differs from the categories of the second emotion features already obtained, the first emotion feature is kept alongside them as a new second emotion feature; otherwise, the first emotion feature is deleted. That is, only one emotion feature per emotion category is stored within a set period, and its time is the moment at which that category was first recognized.
S130, judging whether the current moment is the end time point of the set period; if not, executing S140, and if so, executing S150.
S140, acquiring the next first emotion feature and returning to the operation of adjusting the intensity of the obtained second emotion features according to it, until the end time point of the set period is reached.
In this embodiment, every time an emotion feature is recognized within the set period, the intensities of the already obtained emotion features are adjusted according to the newly obtained one, until the end time point of the set period is reached, ensuring that emotion feature intensities accumulate over the set period.
Illustratively, the process of determining the user's emotion within the set period may be as follows:
1) At the initial moment of the set period, an emotion feature of category A (hereinafter emotion feature A) is recognized, and the terminal records A's generation time t1 and initial intensity S1.
2) The terminal recognizes that the user generates another emotion feature B and judges the relationship between B and A. If B and A are in a negative influence relationship, B is recorded with its generation time and initial intensity, and the intensity of A is adjusted to S1-ΔS according to the intensity adjustment amount. If B and A are in a no-influence relationship, B is recorded with its generation time and initial intensity, and the intensity of A is not adjusted. If B and A are in a positive influence relationship, B is recorded with its generation time and initial intensity, and the intensity of A is adjusted to S1+ΔS according to the intensity adjustment amount. If the newly generated emotion feature is still A (i.e., the same category), the intensity of A is directly increased by the adjustment amount.
3) Step 2) is repeated until the end time point of the set period is reached.
Optionally, if the interval before the terminal recognizes the next emotion feature B exceeds the set period, A has expired and is no longer valid; the terminal deletes A and records B with its corresponding parameters (generation time t2 and initial intensity S2).
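Putting steps 2)-3) together with the expiry rule, one iteration of the loop might look like this sketch (reusing adjust_intensities from the sketch above; the flat list store and the single shared adjustment amount are simplifying assumptions):

    def on_new_feature(second_features: list, first: dict,
                       delta: float, period_seconds: float) -> None:
        # Expire stored features older than the set period before recording
        # the new one, as in the optional step above.
        second_features[:] = [f for f in second_features
                              if first["time"] - f["time"] <= period_seconds]
        # Adjust every stored intensity against the newly recognized feature;
        # a same-category feature is thereby directly strengthened.
        adjust_intensities(second_features, first, delta)
        # Keep at most one feature per category, with its first-seen time.
        if all(f["category"] != first["category"] for f in second_features):
            second_features.append(first)
        # else: the duplicate is discarded; its intensity was already raised.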
S150, determining the target emotion of the user in the set period according to the intensities of the second emotion features.
In this embodiment, after the set period ends, at least one second emotion feature has been obtained within it, each carrying time, category and intensity information; the target emotion of the period can be determined from the intensity information.
Specifically, the target emotion of the user in the set period may be determined as follows: the emotion category corresponding to the second emotion feature with the highest intensity is taken as the candidate emotion, and it is judged whether the intensity of the candidate emotion is greater than or equal to a second set threshold. If so, the candidate emotion is determined to be the target emotion; if not, no target emotion is detected within the set period.
In this embodiment, after the end time point of the set period is reached, the intensities of the stored emotion features are examined. If the intensity of the strongest emotion feature A exceeds the second set threshold, the user's stable emotion in this period is considered to be A. If no emotion feature exceeds the threshold, the user is considered to have generated no valid emotion in this period, and the period's emotion data is discarded.
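The final determination of S150 then reduces to a maximum plus a threshold test, as in this sketch (returning None to stand for "no valid emotion in this period" is an assumption of the sketch):

    def target_emotion(second_features: list, second_threshold: float):
        # The category of the strongest stored feature is the candidate emotion;
        # it becomes the target emotion only if its intensity reaches the
        # second set threshold, otherwise the period's data is discarded.
        if not second_features:
            return None
        best = max(second_features, key=lambda f: f["intensity"])
        return best["category"] if best["intensity"] >= second_threshold else None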
Optionally, after the target emotion of the user in the set period is determined according to the intensities of the second emotion features, the method further includes: sending the target emotion to a server, so that the server pushes resources related to the target emotion to the terminal device or performs operations related to the target emotion.
Resources may include, but are not limited to, themes, wallpaper, lock screens, music, games, and the like.
Illustratively, an example of emotion recognition in this embodiment is as follows:
The user starts the emotion recognition application and agrees to the privacy service terms. The application reads the system time, 9:00 a.m. on September 3, judges it to be a valid time, and begins collecting user data. At 12:00 noon on September 3, the theme application's background service starts; at 12:35 it recognizes the user's voice, "I got a raise today", indicating that the user is happy. The recognized emotion feature is "happy", and a record is added: "happy; 9-3-12:35; intensity: 80".
At 20:08 on the evening of September 3, with the background service running, the user's voice is detected: "I picked up 10 yuan on the way back, what luck!" A record "surprise; 9-3-20:08; intensity: 20" is added, and on checking the earlier record, it is modified to "happy; 9-3-12:35; intensity: 90". Two records, "happy" and "surprise", are now in the database.
At 07:30 on September 4, the user opens a browser and searches for what medicine to take for a stomach ache. From this record the user is judged to be physically unwell, and a record "low; 9-4-07:30; intensity: 30" is added. At the same time the two existing records in the database are checked and modified: because the low mood weakens the user's earlier positive emotion features, after adjustment they become "happy; 9-3-12:35; intensity: 70" and "surprise; 9-3-20:08; intensity: 10". Three records, "happy", "surprise" and "low", are now in the database.
At 21:09 on the night of September 4, the user's voice is detected: "Great, after taking the medicine I felt fine in the afternoon and can go out again." The user's emotion feature is judged to be "happy". Since a "happy" record is already stored in the database, the newly detected "happy" is deleted and the three stored records are adjusted according to preset weights, becoming "happy; 9-3-12:35; intensity: 80", "surprise; 9-3-20:08; intensity: 15" and "low; 9-4-07:30; intensity: 20".
No emotion features are detected on September 5.
The application detects that 3 days, the preset period, have elapsed since the first emotion feature at 9-3-12:35. The database holds three emotion features of different categories; the one with the greatest intensity is "happy", and its intensity exceeds the threshold, so the user's stable emotion feature over this period is judged to be "happy".
The "happy" emotion feature is sent to the server. After receiving it, the server performs the corresponding background operations, such as querying suitable resources, and pushes the result to the user.
The user receives a push message on the mobile phone: "You seem to be in a great mood! Try our latest theme (or wallpaper, music)!" After tapping the notification, the user sees the pushed cheerful or happy resources. The operation is complete.
In the technical solution described above, a first emotion feature of the user at the current moment within a set period is acquired, the first emotion feature including a time parameter, a category parameter and an intensity parameter; the intensity of each second emotion feature already obtained in the set period is adjusted according to the first emotion feature, and the first emotion feature is taken as a new second emotion feature; whether the current moment is the end time point of the set period is judged; if not, the next first emotion feature is acquired and the adjustment operation is repeated until the end time point of the set period is reached; if so, the target emotion of the user in the set period is determined according to the intensities of the second emotion features. In the emotion recognition method provided by this embodiment, each time an emotion feature is acquired within the set period, the intensities of the already acquired emotion features are adjusted according to the newly acquired one, until the end time point of the set period is reached; the target emotion of the user in the set period is then determined from the intensities of the emotion features. The method thus captures the user's stable emotion over a period of time and can improve the accuracy of emotion recognition.
Fig. 2 is a schematic structural diagram of an emotion recognition apparatus provided in an embodiment of the present application. As shown in Fig. 2, the apparatus includes:
a first emotion feature acquisition module 210, configured to acquire a first emotion feature of the user at the current moment within a set period, wherein the first emotion feature includes a time parameter, a category parameter and an intensity parameter;
a second emotion feature adjustment module 220, configured to adjust, according to the first emotion feature, the intensity of each second emotion feature already obtained in the set period, and to take the first emotion feature as a new second emotion feature;
a time judgment module 230, configured to judge whether the current moment is the end time point of the set period;
a return execution module 240, configured to, when the current moment is not the end time point of the set period, acquire the next first emotion feature and return to the operation of adjusting the intensity of the obtained second emotion features in the set period according to the first emotion feature, until the end time point of the set period is reached;
a target emotion determination module 250, configured to determine, when the current moment is the end time point of the set period, the target emotion of the user in the set period according to the intensities of the second emotion features.
Optionally, the first emotion feature acquisition module 210 is further configured to:
collect any one or a combination of the user's voice signal, the music signal played by the user, and the user's search records, where the intensity of the voice signal exceeds a first set threshold;
obtain the first emotion feature of the user at the current moment according to any one or a combination of the voice signal, the music signal and the search records.
Optionally, the second emotion feature adjustment module 220 is further configured to:
acquire the influence relationship and the intensity adjustment amount between the first emotion feature and a second emotion feature acquired in the set period;
adjust the intensity of the second emotion feature obtained in the set period according to the influence relationship and the intensity adjustment amount.
Optionally, the influence relationship includes a positive influence relationship, a negative influence relationship and a no-influence relationship, and the second emotion feature adjustment module 220 is further configured to:
if the first and the second emotion feature are in a positive influence relationship, increase the intensity of the second emotion feature by the adjustment amount;
if they are in a negative influence relationship, decrease the intensity of the second emotion feature by the adjustment amount;
if they are in a no-influence relationship, keep the intensity of the second emotion feature unchanged.
Optionally, the target emotion determination module 250 is further configured to:
determine the emotion category corresponding to the second emotion feature with the highest intensity as a candidate emotion;
judge whether the intensity of the candidate emotion is greater than or equal to a second set threshold;
if so, determine the candidate emotion as the target emotion; otherwise, determine that no target emotion is detected within the set period.
Optionally, the apparatus further includes a resource pushing module, configured to:
send the target emotion to a server, so that the server pushes resources related to the target emotion to a terminal device or performs operations related to the target emotion.
Optionally, the second emotion feature adjustment module 220 is further configured to: if the category of the first emotion feature differs from the categories of the obtained second emotion features, keep the first emotion feature alongside them as a new second emotion feature; otherwise, delete the first emotion feature.
In one embodiment, Fig. 3 is a schematic structural diagram of an emotion recognition device according to an embodiment of the present application. As shown in Fig. 3, the device provided by the present application includes: a processor 310 and a memory 320. There may be one or more processors 310 in the device; one processor 310 is illustrated in Fig. 3. Likewise, there may be one or more memories 320; one memory 320 is illustrated in Fig. 3. The processor 310 and the memory 320 of the device may be connected by a bus or in other ways; connection by a bus is illustrated in Fig. 3. In an embodiment, the device is a computer device.
As a computer-readable storage medium, the memory 320 may be configured to store software programs, computer-executable programs and modules, such as the program instructions/modules corresponding to the apparatus of any embodiment of the present application (e.g., the modules of the emotion recognition apparatus described above). The memory 320 may include a program storage area and a data storage area: the program storage area may store the operating system and the application programs required for at least one function, while the data storage area may store data created through the use of the device, and the like. In addition, the memory 320 may include high-speed random access memory as well as non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 320 may further include memory located remotely from the processor 310, connected to the device via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The device provided above can be configured to perform the emotion recognition method provided by any of the above embodiments, with the corresponding functions and effects.
The program stored in the memory 320 may be the program instructions/modules corresponding to the emotion recognition method provided in the embodiments of the present application; by running the software programs, instructions and modules stored in the memory 320, the processor 310 executes the functions of the computer device and processes data, that is, implements the emotion recognition method of the above method embodiments. It can be understood that the device can execute the emotion recognition method provided by any embodiment of the present application, with the corresponding functions and effects.
The present embodiments also provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an emotion recognition method comprising: acquiring a first emotion feature of a user at the current moment within a set period, wherein the first emotion feature includes a time parameter, a category parameter and an intensity parameter; adjusting the intensity of each second emotion feature already obtained in the set period according to the first emotion feature, and taking the first emotion feature as a new second emotion feature; judging whether the current moment is the end time point of the set period; if not, acquiring the next first emotion feature and returning to the adjustment operation until the end time point of the set period is reached; if so, determining the target emotion of the user in the set period according to the intensities of the second emotion features.
It will be appreciated by those skilled in the art that the term user equipment encompasses any suitable type of wireless user equipment, such as mobile telephones, portable data processing devices, portable web browsers, or vehicle-mounted mobile stations.
In general, the various embodiments of the application may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the application is not limited thereto.
Embodiments of the present application may be implemented by a data processor of a mobile device executing computer program instructions, e.g. in a processor entity, either in hardware, or in a combination of software and hardware. The computer program instructions may be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages.
The block diagrams of any logic flow in the figures of this application may represent program steps, or interconnected logic circuits, modules and functions, or a combination of program steps and logic circuits, modules and functions. The computer program may be stored on a memory. The memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as, but not limited to, read-only memory (Read-Only Memory, ROM), random access memory (Random Access Memory, RAM), and optical memory devices and systems (digital versatile discs (Digital Versatile Disc, DVD) or compact discs (Compact Disc, CD)). The computer-readable medium may include a non-transitory storage medium. The data processor may be of any type suitable to the local technical environment, such as, but not limited to, general-purpose computers, special-purpose computers, microprocessors, digital signal processors (Digital Signal Processor, DSP), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic devices (Field-Programmable Gate Array, FPGA), and processors based on multi-core processor architectures.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the scope of the present application.
The foregoing has described exemplary embodiments of the present application in detail by way of exemplary and non-limiting example. Various modifications and adaptations of the above embodiments will be apparent to those skilled in the art without departing from the scope of the invention as defined in the accompanying drawings and claims. Accordingly, the proper scope of the invention is to be determined according to the claims.

Claims (10)

1. An emotion recognition method, comprising:
acquiring a first emotion feature of a user at the current moment within a set period, wherein the first emotion feature comprises a time parameter, a category parameter and an intensity parameter;
adjusting, according to the first emotion feature, the intensity of each second emotion feature already obtained in the set period, and taking the first emotion feature as a new second emotion feature;
judging whether the current moment is the end time point of the set period;
if not, acquiring the next first emotion feature, and returning to the operation of adjusting the intensity of the obtained second emotion features in the set period according to the first emotion feature, until the end time point of the set period is reached;
if so, determining a target emotion of the user in the set period according to the intensities of the second emotion features.
2. The method of claim 1, wherein acquiring the first emotion feature of the user at the current moment within the set period comprises:
collecting any one or a combination of a voice signal of the user, a music signal played by the user, and a search record of the user;
obtaining the first emotion feature of the user at the current moment according to any one or a combination of the voice signal, the music signal and the search record.
3. The method of claim 1, wherein adjusting the intensity of each second emotion feature already obtained in the set period according to the first emotion feature comprises:
acquiring an influence relationship and an intensity adjustment amount between the first emotion feature and a second emotion feature acquired in the set period;
adjusting the intensity of the second emotion feature obtained in the set period according to the influence relationship and the intensity adjustment amount.
4. The method of claim 3, wherein the influence relationship comprises a positive influence relationship, a negative influence relationship and a no-influence relationship, and adjusting the intensity of the second emotion feature obtained in the set period according to the influence relationship and the intensity adjustment amount comprises:
if the first emotion feature and the second emotion feature are in a positive influence relationship, increasing the intensity of the second emotion feature by the adjustment amount;
if the first emotion feature and the second emotion feature are in a negative influence relationship, decreasing the intensity of the second emotion feature by the adjustment amount;
if the first emotion feature and the second emotion feature are in a no-influence relationship, keeping the intensity of the second emotion feature unchanged.
5. The method of claim 1, wherein determining the target emotion of the user in the set period according to the intensities of the second emotion features comprises:
determining the emotion category corresponding to the second emotion feature with the highest intensity as a candidate emotion;
judging whether the intensity of the candidate emotion is greater than or equal to a second set threshold;
if so, determining the candidate emotion as the target emotion; otherwise, determining that no target emotion is detected within the set period.
6. The method of claim 1, further comprising, after determining the target emotion of the user in the set period according to the intensities of the second emotion features:
sending the target emotion to a server, so that the server pushes a resource related to the target emotion to a terminal device or performs an operation related to the target emotion.
7. The method of claim 1, wherein taking the first emotion feature as a new second emotion feature comprises:
if the category of the first emotion feature is different from the categories of the obtained second emotion features, taking the first emotion feature together with the obtained second emotion features as the new second emotion features; otherwise, deleting the first emotion feature.
8. An emotion recognition apparatus, comprising:
a first emotion feature acquisition module, configured to acquire a first emotion feature of a user at the current moment within a set period, wherein the first emotion feature comprises a time parameter, a category parameter and an intensity parameter;
a second emotion feature adjustment module, configured to adjust, according to the first emotion feature, the intensity of each second emotion feature already obtained in the set period, and to take the first emotion feature together with the obtained second emotion features as new second emotion features;
a time judgment module, configured to judge whether the current moment is the end time point of the set period;
a return execution module, configured to, when the current moment is not the end time point of the set period, acquire the next first emotion feature and return to the operation of adjusting the intensity of the obtained second emotion features in the set period according to the first emotion feature, until the end time point of the set period is reached;
a target emotion determination module, configured to determine, when the current moment is the end time point of the set period, a target emotion of the user in the set period according to the intensities of the second emotion features.
9. An emotion recognition device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the emotion recognition method of any one of claims 1-7.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the emotion recognition method of any one of claims 1-7.
CN202111299042.6A, filed 2021-11-04: Emotion recognition method, device, equipment and storage medium. Status: Pending. Published as CN116072102A.

Priority Applications (2)

    • CN202111299042.6A (priority/filing date 2021-11-04): Emotion recognition method, device, equipment and storage medium
    • PCT/CN2022/108192: Emotional recognition method and apparatus, and device and storage medium (published as WO2023077883A1)

Publications (1)

    • CN116072102A, published 2023-05-05 (pending)

Family ID: 86172129

Country Status (2)

    • CN: CN116072102A
    • WO: WO2023077883A1


Also Published As

    • WO2023077883A1, published 2023-05-11


Legal Events

    • PB01: Publication