CN114359813A - Depression emotion detection method and device

Depression emotion detection method and device

Info

Publication number
CN114359813A
CN114359813A (application CN202210031565.0A)
Authority
CN
China
Prior art keywords
user
depression
information
video
historical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210031565.0A
Other languages
Chinese (zh)
Inventor
杨月乔
买晓琴
陈洁茹
魏昊冰
薛烨
陈开�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Renmin University of China
Original Assignee
Renmin University of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Renmin University of China filed Critical Renmin University of China
Priority to CN202210031565.0A priority Critical patent/CN114359813A/en
Publication of CN114359813A publication Critical patent/CN114359813A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a depression emotion detection method and device. The method comprises: in response to a depression emotion detection instruction for a user, obtaining video viewing information of the user, where the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information; performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; and inputting the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user. By applying the method provided by the embodiments of the invention, the user's depression emotion can be identified from multi-dimensional information such as user information, user browsing data, target video data, and text information, and the depression emotion detection result of the user can be obtained quickly and accurately.

Description

Depression emotion detection method and device
Technical Field
The invention relates to the technical field of data processing, and in particular to a depression emotion detection method and device.
Background
With the development of various network platforms, people increasingly express their emotions on them. The information users leave on these platforms can also be tracked and observed as indicators for diagnosing psychological illness, and automatic depression detection technology makes it possible to recognize depression emotion from that information.
However, existing depression emotion recognition methods generally perform classification and recognition based only on the content a user publishes. Because both the recognition basis and the recognition mode are single, recognition accuracy tends to be low.
Disclosure of Invention
The invention aims to provide a depression emotion detection method that can accurately obtain a depression emotion detection result of a user.
The invention also provides a depression emotion detection device for ensuring the realization and application of the method in practice.
A depression emotion detection method, comprising:
in response to a depression emotion detection instruction for a user, obtaining video viewing information of the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user;
performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information;
and inputting the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
Optionally, in the above method, performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain the depression emotion features of the user comprises:
vectorizing the user information and the user browsing data respectively to obtain the first feature corresponding to the user information and the second feature corresponding to the user browsing data;
classifying each target video data with a pre-trained video classification model to obtain a video depression emotion score of each target video data, and determining the third feature corresponding to each target video data according to each video depression emotion score;
processing each piece of text information with a pre-trained word vector model to obtain a text depression emotion score of each piece of text information, and determining the fourth feature corresponding to each piece of text information according to the text depression emotion score of each piece of text information;
and composing the depression emotion features of the user from the first feature, the second feature, the third feature, and the fourth feature.
Optionally, in the above method, the process of training the video classification model comprises:
acquiring target training videos from the video viewing information of each historical user; the target training videos are the video data that meet a preset browsing condition among all the historical video data browsed by the historical users;
measuring a first depression score label for each target training video based on a preset tested depression scale;
training a preset initial video classification model according to the target training videos and the first depression score label of each target training video until the initial video classification model meets a preset first training completion condition;
and determining the initial video classification model that meets the first training completion condition as the trained video classification model.
Optionally, in the above method, the process of training the word vector model comprises:
acquiring each piece of historical text information from the video viewing information of each historical user, wherein the historical text information is at least one of comment content and search content input by the historical user;
measuring a second depression score label for each piece of historical text information based on a preset tested depression scale;
training a preset initial word vector model according to the historical text information and the second depression score label of each piece of historical text information until the initial word vector model meets a preset second training completion condition;
and determining the initial word vector model that meets the second training completion condition as the trained word vector model.
Optionally, in the above method, the process of constructing the depression emotion detection model comprises:
acquiring a training sample set; the training sample set comprises the video viewing information of each historical user;
performing feature extraction on the video viewing information of each historical user in the training sample set to obtain the depression emotion features of each historical user;
and constructing the depression emotion detection model based on a preset LightGBM algorithm and the depression emotion features of the historical users.
Optionally, after the depression emotion detection result of the user is obtained, the above method further comprises:
determining whether the user is in a depressed state based on a depression degree score characterized by the depression emotion detection result;
and outputting depression warning information for the user when the user is in a depressed state.
A depression emotion detection device, comprising:
an acquisition unit configured to obtain video viewing information of a user in response to a depression emotion detection instruction for the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user;
a feature extraction unit configured to perform feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information;
and a detection unit configured to input the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
Optionally, in the above device, the detection unit comprises:
an acquisition subunit configured to acquire a training sample set; the training sample set comprises the video viewing information of each historical user;
a feature extraction subunit configured to perform feature extraction on the video viewing information of each historical user in the training sample set to obtain the depression emotion features of each historical user;
and a construction subunit configured to construct the depression emotion detection model based on a preset LightGBM algorithm and the depression emotion features of the historical users.
Optionally, in the above device, the feature extraction unit comprises:
a first execution subunit configured to vectorize the user information and the user browsing data respectively to obtain the first feature corresponding to the user information and the second feature corresponding to the user browsing data;
a second execution subunit configured to classify each target video data with a pre-trained video classification model to obtain a video depression emotion score of each target video data, and determine the third feature corresponding to each target video data according to each video depression emotion score;
a third execution subunit configured to process each piece of text information with a pre-trained word vector model to obtain a text depression emotion score of each piece of text information, and determine the fourth feature corresponding to each piece of text information according to the text depression emotion score of each piece of text information;
and a fourth execution subunit configured to compose the depression emotion features of the user from the first feature, the second feature, the third feature, and the fourth feature.
Optionally, the above device further comprises:
a determining unit configured to determine whether the user is in a depressed state based on a depression degree score characterized by the depression emotion detection result;
and an output unit configured to output depression warning information for the user when the user is in a depressed state.
A storage medium comprising stored instructions, wherein the instructions, when executed, control a device on which the storage medium is located to perform the depression emotion detection method described above.
An electronic device comprising a memory and one or more instructions, wherein the one or more instructions are stored in the memory and configured to be executed by one or more processors to perform the depression emotion detection method described above.
Compared with the prior art, the invention has the following advantages:
the invention provides a method and a device for detecting depressed emotion, a storage medium and electronic equipment, wherein the method comprises the following steps: in response to a depressed mood detection instruction for a user, obtaining video viewing information for the user; the video watching information comprises user information, user browsing data, each target video data and each character information input by the user; extracting the characteristics of the user information, the user browsing data, each target video data and each character information to obtain the depression emotion characteristics of the user; the depressed mood features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data and a fourth feature corresponding to the text information; inputting the depression emotion characteristics of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user. By applying the method provided by the embodiment of the invention, the depression emotion of the user can be identified through multi-dimensional information such as user information, user browsing data, target video data, character information and the like, and the depression emotion detection result of the user can be quickly and accurately obtained.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from the provided drawings without creative effort.
FIG. 1 is a flowchart of a depression emotion detection method provided by the present invention;
FIG. 2 is a flowchart of a process for obtaining depression emotion features of a user provided by the present invention;
FIG. 3 is a flowchart of a depression emotion detection process for a user provided by the present invention;
FIG. 4 is a schematic structural diagram of a depression emotion detection device provided by the present invention;
FIG. 5 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
At present, various network platforms, and short video platforms in particular, are developing rapidly and have large numbers of active users. When users feel depressed or experience other emotions, they may scroll through short videos on a short video platform to express those emotions.
Based on this, an embodiment of the present invention provides a depression emotion detection method. The method may be applied to an electronic device, and its flowchart is shown in fig. 1. The method specifically comprises:
S101: in response to a depression emotion detection instruction for a user, obtaining video viewing information of the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user.
In this embodiment, the target video data is the video data that meets a preset browsing condition among the historical video data browsed by the user. The browsing condition may be that the user has performed one or more operations on the video data, such as liking, following, forwarding, playing, commenting, or sharing. The video data may be short video data, whose playing duration is less than a preset duration threshold.
Optionally, the user information may include the user's age, occupation, gender, the scale of the city where the user is located, and the like.
Optionally, the user browsing data may include the median of the user's browsing times in each time period, the browsing duration in each time period, and the like.
Optionally, the text information input by the user may include the user's search content, comment content, and the like.
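As a non-limiting illustration only, the following Python sketch shows one way the target video data of S101 might be filtered out of the browsed videos by the browsing condition; the record fields and operation names are assumptions made for this example, not part of the disclosure.

```python
# A minimal sketch, assuming hypothetical per-video interaction flags; the
# described method only requires that at least one such operation occurred.
BROWSING_OPERATIONS = ("liked", "followed", "forwarded", "played", "commented", "shared")

def select_target_videos(browsed_videos):
    """Keep the browsed videos on which the user performed at least one operation."""
    return [video for video in browsed_videos
            if any(video.get(op) for op in BROWSING_OPERATIONS)]

browsed = [
    {"id": "v1", "liked": True},               # kept: the user liked it
    {"id": "v2"},                              # dropped: merely browsed
    {"id": "v3", "commented": True, "shared": True},
]
target_videos = select_target_videos(browsed)  # -> v1 and v3
```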
S102: performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information.
In this embodiment, the features of the user information and the user browsing data may be extracted by directly vectorizing them, yielding the first feature corresponding to the user information and the second feature corresponding to the user browsing data.
Optionally, the third feature may be statistical features of the video depression emotion scores of the target video data, and the fourth feature may be statistical features of the text depression emotion scores of the pieces of text information.
S103: inputting the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
In this embodiment, the depression emotion detection model may be any of various types of machine learning models, for example a LightGBM model.
Optionally, the depression emotion detection result may characterize a depression degree score of the user. After the depression emotion detection result of the user is obtained, the depression degree score it characterizes may be displayed on a preset display interface.
By applying the method provided by the embodiment of the invention, the user's depression emotion can be identified from multi-dimensional information such as user information, user browsing data, target video data, and text information, and the depression emotion detection result of the user can be obtained quickly and accurately.
In the embodiment of the present invention, based on the above implementation process, optionally, the process of performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain the depression emotion features of the user may be as shown in fig. 2:
S201: vectorizing the user information and the user browsing data respectively to obtain the first feature corresponding to the user information and the second feature corresponding to the user browsing data.
In the embodiment of the present invention, the user information may include the user's age, occupation, gender, and the scale of the city where the user is located, and the user information may be directly vectorized to obtain the first feature corresponding to the user information.
Optionally, the user browsing data may include the median of the user's browsing times in each time period, the browsing duration in each time period, and the like, and the user browsing data may be directly vectorized to obtain the second feature corresponding to the user browsing data.
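As a non-limiting illustration, a minimal Python sketch of S201 follows; the field names, category vocabularies, and encoding are assumptions made for this example only, since the disclosure does not fix a concrete vectorization scheme.

```python
import numpy as np

# Assumed category vocabularies; the disclosure does not specify them.
OCCUPATIONS = ["student", "office_worker", "other"]
GENDERS = ["female", "male"]
CITY_SCALES = ["small", "medium", "large"]

def one_hot(value, vocab):
    """Encode a categorical value as a one-hot vector."""
    vec = np.zeros(len(vocab), dtype=np.float32)
    vec[vocab.index(value)] = 1.0
    return vec

def vectorize(user_info, browsing_data):
    """Directly vectorize user information (first feature) and browsing data (second feature)."""
    first = np.concatenate([
        np.array([user_info["age"]], dtype=np.float32),
        one_hot(user_info["occupation"], OCCUPATIONS),
        one_hot(user_info["gender"], GENDERS),
        one_hot(user_info["city_scale"], CITY_SCALES),
    ])
    second = np.array([
        browsing_data["median_browse_time"],   # median browsing time per period
        browsing_data["browse_minutes"],       # browsing duration per period
    ], dtype=np.float32)
    return first, second
```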
S202: classifying each target video data with a pre-trained video classification model to obtain a video depression emotion score of each target video data, and determining the third feature corresponding to each target video data according to each video depression emotion score.
In the embodiment of the invention, the video classification model may be a Vision Transformer model. After each target video data is input into the video classification model, the video depression emotion score of each target video data is obtained, and statistics such as the mean, variance, median, and maximum of the video depression emotion scores can be calculated; these statistics are composed into the third feature of the target video data.
S203: processing each piece of text information with a pre-trained word vector model to obtain a text depression emotion score of each piece of text information, and determining the fourth feature corresponding to each piece of text information according to the text depression emotion score of each piece of text information.
In the embodiment of the invention, the word vector model may be a BERT model. After each piece of text information is input into the word vector model, the text depression emotion score of each piece of text information is obtained, and statistics such as the mean, variance, median, and maximum of the text depression emotion scores can be calculated; these statistics are composed into the fourth feature of the text information.
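As a non-limiting illustration, the following Python sketch computes the statistical third and fourth features of S202 and S203 from per-item depression emotion scores; the scores themselves are toy values standing in for the upstream model outputs.

```python
import numpy as np

def score_statistics(scores):
    """Mean, variance, median, and maximum of a list of depression emotion scores."""
    s = np.asarray(scores, dtype=np.float32)
    return np.array([s.mean(), s.var(), np.median(s), s.max()], dtype=np.float32)

video_scores = [0.7, 0.4, 0.9]   # toy video depression emotion scores (S202)
text_scores = [0.6, 0.8]         # toy text depression emotion scores (S203)
third_feature = score_statistics(video_scores)
fourth_feature = score_statistics(text_scores)
```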
S204: composing the depression emotion features of the user from the first feature, the second feature, the third feature, and the fourth feature.
In the embodiment of the invention, the first feature, the second feature, the third feature, and the fourth feature may be combined into the depression emotion features of the user in a preset combination mode.
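As a non-limiting illustration, one simple combination mode for S204 is plain concatenation; the disclosure leaves the preset combination mode open, so this is an assumption.

```python
import numpy as np

def compose_depression_emotion_features(first, second, third, fourth):
    """Concatenate the four per-dimension features into one feature vector."""
    return np.concatenate([first, second, third, fourth])
```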
In the embodiment of the present invention, based on the above implementation process, optionally, the process of training the video classification model comprises:
acquiring target training videos from the video viewing information of each historical user; the target training videos are the video data that meet a preset browsing condition among all the historical video data browsed by the historical users;
measuring a first depression score label for each target training video based on a preset tested depression scale;
training a preset initial video classification model according to the target training videos and the first depression score label of each target training video until the initial video classification model meets a preset first training completion condition;
and determining the initial video classification model that meets the first training completion condition as the trained video classification model.
In this embodiment, the browsing condition may be one of the historical user's liking, following, forwarding, playing, commenting on, or sharing of the video data.
Optionally, the initial video classification model may be an initial Vision Transformer model. The initial video classification model is trained based on each target training video and its first depression score label, with the model parameters updated during training. The first training completion condition may be that the prediction accuracy of the initial video classification model is greater than a preset accuracy threshold, that the number of training iterations is greater than a preset threshold, or the like.
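As a non-limiting illustration, the following Python sketch expresses the first training completion condition as a generic training loop; the model, training step, and evaluation routine are placeholders, and the threshold values are assumptions.

```python
ACCURACY_THRESHOLD = 0.9   # assumed preset accuracy threshold
MAX_ITERATIONS = 50        # assumed preset training-iteration threshold

def train_until_complete(model, train_one_pass, evaluate):
    """Train until prediction accuracy or the iteration budget meets the completion condition."""
    for iteration in range(MAX_ITERATIONS):
        train_one_pass(model)        # one pass over the labelled target training videos
        accuracy = evaluate(model)   # prediction accuracy on held-out labelled videos
        if accuracy > ACCURACY_THRESHOLD:
            break                    # first training completion condition met
    return model                     # the trained video classification model
```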
In the embodiment of the present invention, based on the above implementation process, optionally, the process of training the word vector model comprises:
acquiring each piece of historical text information from the video viewing information of each historical user, wherein the historical text information is at least one of comment content and search content input by the historical user;
measuring a second depression score label for each piece of historical text information based on a preset tested depression scale;
training a preset initial word vector model according to the historical text information and the second depression score label of each piece of historical text information until the initial word vector model meets a preset second training completion condition;
and determining the initial word vector model that meets the second training completion condition as the trained word vector model.
In this embodiment, the initial word vector model may be an initial BERT model. The initial word vector model is trained based on each piece of historical text information and its second depression score label, with the model parameters updated during training. The second training completion condition may be that the prediction accuracy of the initial word vector model is greater than a preset accuracy threshold, that the number of training iterations is greater than a preset threshold, or the like.
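As a non-limiting illustration, a minimal sketch of one fine-tuning step follows, assuming the Hugging Face transformers library with a single-output regression head standing in for the second depression score labels; the texts and labels are toy data.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
# num_labels=1 makes the classification head a regression head trained with MSE loss.
model = BertForSequenceClassification.from_pretrained("bert-base-chinese", num_labels=1)

texts = ["最近什么都提不起兴趣", "今天的视频真有趣"]   # toy comment/search content
labels = torch.tensor([[0.8], [0.1]])                  # toy second depression score labels

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # MSE between predicted and labelled scores
loss.backward()
optimizer.step()
```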
In the embodiment of the present invention, based on the above implementation process, optionally, the process of constructing the depression emotion detection model comprises:
acquiring a training sample set; the training sample set comprises the video viewing information of each historical user;
performing feature extraction on the video viewing information of each historical user in the training sample set to obtain the depression emotion features of each historical user;
and constructing the depression emotion detection model based on a preset LightGBM algorithm and the depression emotion features of the historical users.
In this embodiment, the training sample set comprises the video viewing information of each historical user, which may include each historical user's user information, user browsing data, target training video data, and historical text information input by the user; the depression emotion detection model may be a LightGBM model.
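As a non-limiting illustration, a minimal LightGBM sketch follows, assuming the historical users' depression emotion features are stacked row-wise and labelled with scale-measured depression degree scores; all data here are toy values.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.random((100, 12))   # toy depression emotion features of 100 historical users
y = rng.random(100)         # toy depression degree scores from the tested depression scale

train_set = lgb.Dataset(X, label=y)
params = {"objective": "regression", "metric": "l2", "verbosity": -1}
detection_model = lgb.train(params, train_set, num_boost_round=100)

user_features = rng.random((1, 12))                           # one user's depression emotion features
depression_score = detection_model.predict(user_features)[0]  # depression emotion detection result
```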
In the embodiment of the present invention, based on the above implementation process, optionally, after the depression emotion detection result of the user is obtained, the method further comprises:
determining whether the user is in a depressed state based on a depression degree score characterized by the depression emotion detection result;
and outputting depression warning information for the user when the user is in a depressed state.
In this embodiment, the depression degree score characterized by the depression emotion detection result may be compared with a preset depression degree score threshold; if the depression degree score is greater than the threshold, the user may be determined to be in a depressed state.
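As a non-limiting illustration, the threshold comparison might look like the following sketch; the threshold value and the form of the warning output are assumptions.

```python
DEPRESSION_SCORE_THRESHOLD = 0.6  # assumed preset depression degree score threshold

def warn_if_depressed(user_id, depression_score):
    """Output depression warning information when the user is in a depressed state."""
    if depression_score > DEPRESSION_SCORE_THRESHOLD:
        print(f"depression warning: user {user_id} scored {depression_score:.2f}")
```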
The depression emotion detection method provided by the embodiment of the invention may be applied to a short video platform. On such a platform, a user can enter search content to find video data of interest, browse short video data, and like, follow, forward, play, comment on, and share short video data. When depression emotion detection needs to be performed for the user, a depression emotion detection instruction for the user can be obtained and the user's video viewing information acquired; the video viewing information includes user information, user browsing data, target video data, and text information input by the user. After the video viewing information of the user is obtained, the depression emotion detection process proceeds as shown in fig. 3, specifically as follows:
S301: extracting and processing short-video depression emotion feature data. Feature extraction may be performed on the user information, the user browsing data, each target video data, and each piece of text information to obtain the depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information.
S302: detecting the depression emotion of the short video platform user. The depression emotion features of the user may be input into the pre-constructed depression emotion detection model to obtain the depression emotion detection result of the user.
For the extraction and processing of short-video depression emotion feature data, different extraction and processing methods are adopted for the various kinds of information about a short video platform user, and the resulting depression emotion feature data are used as machine learning training data and for depression emotion recognition.
According to the method for detecting the depression emotion of a short video platform user, after training on the collected data, intelligent recognition and scoring of the user's depression emotion are achieved based on the user's behaviors and operations on the short video platform.
In this embodiment, personal basic information and browsing data are directly converted into training feature data; video data are converted into score data with a video depression degree scoring method based on a Vision Transformer video classification model; and text data are converted into score data with a BERT-based text depression degree scoring method.
Optionally, in the Vision Transformer-based video depression degree scoring process, depression scores are measured with a tested depression scale, and the short videos that were liked, followed, forwarded, played, commented on, and shared are scored for depression emotion. The Vision Transformer video classification model is trained on these labelled data, and the user's depression emotion is finally characterized by statistics of the depression emotion scores of the user's multiple videos.
In some embodiments, the Vision Transformer-based video depression degree scoring may comprise scale-measurement-based short-video depression emotion labelling and Vision Transformer-based short-video depression emotion perception. Specifically, the scale-measurement-based labelling measures depression degree scores with a depression scale administered to subjects and labels the short videos that were liked, followed, forwarded, played, commented on, and shared with the corresponding scores; the Vision Transformer-based perception understands video emotion with the self-attention mechanism of the Transformer architecture and scores the degree of depression based on the training data.
Optionally, the BERT-based text depression degree scoring method measures depression scores with a tested depression scale, scores the texts entered in comments and searches for depression emotion, scores the depression emotion of text information with the trained BERT model, and characterizes the user's depression emotion by statistics of the depression emotion scores of the user's multiple text segments.
In some embodiments, the BERT-based text depression degree scoring comprises scoring search texts and scoring comment texts. For the depression degree score of search texts, Chinese search-text depression emotion scoring can be achieved with BERT, and the depression emotion scores of all of a user's search texts are measured against the tested depression scale; for the depression degree score of comment texts, Chinese comment-text depression emotion scoring can likewise be achieved with BERT, and the depression emotion scores of all of a user's comment texts are measured against the tested depression scale.
In some embodiments, four types of data are collected: personal basic information (age, occupation, gender, city scale), browsing data (median daily browsing time, daily browsing duration), video data (statistical features of the video depression degree scores of shared, liked, played, commented, forwarded, and followed videos), and text data (statistical features of the text depression degree scores of comment and search content). A model is built with the LightGBM method and trained on these data, and the model is then used to detect the user's depression emotion and give a depression degree score.
By applying the method provided by the embodiment of the invention, depression emotion detection for the user can be realized; video emotion can be understood from relatively few samples based on the Vision Transformer, and text emotion can be understood from relatively few samples based on BERT. Based on LightGBM, the method identifies the user's depression emotion from the multiple dimensions of the user's browsing data, personal information, video data, and text data, so detection can be performed effectively, accurately, and quickly.
Corresponding to the method illustrated in fig. 1, an embodiment of the present invention further provides a depression emotion detection device for implementing that method. The depression emotion detection device provided in the embodiment of the present invention may be applied to an electronic device, and its structural diagram is shown in fig. 4. It specifically comprises:
an acquisition unit 401 configured to obtain video viewing information of a user in response to a depression emotion detection instruction for the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user;
a feature extraction unit 402 configured to perform feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information;
and a detection unit 403 configured to input the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
In an embodiment provided by the present invention, based on the above scheme, optionally, the detection unit 403 comprises:
an acquisition subunit configured to acquire a training sample set; the training sample set comprises the video viewing information of each historical user;
a feature extraction subunit configured to perform feature extraction on the video viewing information of each historical user in the training sample set to obtain the depression emotion features of each historical user;
and a construction subunit configured to construct the depression emotion detection model based on a preset LightGBM algorithm and the depression emotion features of the historical users.
In an embodiment provided by the present invention, based on the above scheme, optionally, the feature extraction unit 402 comprises:
a first execution subunit configured to vectorize the user information and the user browsing data respectively to obtain the first feature corresponding to the user information and the second feature corresponding to the user browsing data;
a second execution subunit configured to classify each target video data with a pre-trained video classification model to obtain a video depression emotion score of each target video data, and determine the third feature corresponding to each target video data according to each video depression emotion score;
a third execution subunit configured to process each piece of text information with a pre-trained word vector model to obtain a text depression emotion score of each piece of text information, and determine the fourth feature corresponding to each piece of text information according to the text depression emotion score of each piece of text information;
and a fourth execution subunit configured to compose the depression emotion features of the user from the first feature, the second feature, the third feature, and the fourth feature.
In an embodiment provided by the present invention, based on the above scheme, optionally, the second execution subunit is configured to:
acquire target training videos from the video viewing information of each historical user, the target training videos being the video data that meet a preset browsing condition among all the historical video data browsed by the historical users;
measure a first depression score label for each target training video based on a preset tested depression scale;
train a preset initial video classification model according to the target training videos and the first depression score label of each target training video until the initial video classification model meets a preset first training completion condition;
and determine the initial video classification model that meets the first training completion condition as the trained video classification model.
In an embodiment provided by the present invention, based on the above scheme, optionally, the third execution subunit is configured to:
acquire each piece of historical text information from the video viewing information of each historical user, wherein the historical text information is at least one of comment content and search content input by the historical user;
measure a second depression score label for each piece of historical text information based on a preset tested depression scale;
train a preset initial word vector model according to the historical text information and the second depression score label of each piece of historical text information until the initial word vector model meets a preset second training completion condition;
and determine the initial word vector model that meets the second training completion condition as the trained word vector model.
In an embodiment provided by the present invention, based on the above scheme, optionally, the depression emotion detection device further comprises:
a determining unit configured to determine whether the user is in a depressed state based on a depression degree score characterized by the depression emotion detection result;
and an output unit configured to output depression warning information for the user when the user is in a depressed state.
The specific principles and implementation processes of the units and modules in the depression emotion detection device disclosed in the embodiment of the present invention are the same as those of the depression emotion detection method disclosed in the embodiment of the present invention; reference may be made to the corresponding parts of the method description, which are not repeated here.
An embodiment of the present invention further provides a storage medium comprising stored instructions, where the instructions, when executed, control the device on which the storage medium is located to perform the depression emotion detection method described above.
An embodiment of the present invention provides an electronic device whose structural diagram is shown in fig. 5. The electronic device specifically includes a memory 501 and one or more instructions 502, where the one or more instructions 502 are stored in the memory 501 and configured to be executed by one or more processors 503 to perform the following operations:
in response to a depression emotion detection instruction for a user, obtaining video viewing information of the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user;
performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information;
and inputting the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in a plurality of software and/or hardware when implementing the invention.
From the above description of the embodiments, it is clear to those skilled in the art that the present invention can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which may be stored in a storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments.
The depression emotion detection method provided by the invention is described in detail above. The principles and implementation of the invention are explained with specific examples, and the description of the embodiments is only intended to help readers understand the method and its core idea. Meanwhile, those skilled in the art may, following the idea of the present invention, vary the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. A depression emotion detection method, comprising:
in response to a depression emotion detection instruction for a user, obtaining video viewing information of the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user;
performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information;
and inputting the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
2. The method according to claim 1, wherein performing feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain the depression emotion features of the user comprises:
vectorizing the user information and the user browsing data respectively to obtain the first feature corresponding to the user information and the second feature corresponding to the user browsing data;
classifying each target video data with a pre-trained video classification model to obtain a video depression emotion score of each target video data, and determining the third feature corresponding to each target video data according to each video depression emotion score;
processing each piece of text information with a pre-trained word vector model to obtain a text depression emotion score of each piece of text information, and determining the fourth feature corresponding to each piece of text information according to the text depression emotion score of each piece of text information;
and composing the depression emotion features of the user from the first feature, the second feature, the third feature, and the fourth feature.
3. The method of claim 2, wherein the process of training the video classification model comprises:
acquiring target training videos from the video viewing information of each historical user; the target training videos are the video data that meet a preset browsing condition among all the historical video data browsed by the historical users;
measuring a first depression score label for each target training video based on a preset tested depression scale;
training a preset initial video classification model according to the target training videos and the first depression score label of each target training video until the initial video classification model meets a preset first training completion condition;
and determining the initial video classification model that meets the first training completion condition as the trained video classification model.
4. The method of claim 1, wherein the process of training the word vector model comprises:
acquiring each piece of historical text information from the video viewing information of each historical user, wherein the historical text information is at least one of comment content and search content input by the historical user;
measuring a second depression score label for each piece of historical text information based on a preset tested depression scale;
training a preset initial word vector model according to the historical text information and the second depression score label of each piece of historical text information until the initial word vector model meets a preset second training completion condition;
and determining the initial word vector model that meets the second training completion condition as the trained word vector model.
5. The method of claim 1, wherein the process of constructing the depression emotion detection model comprises:
acquiring a training sample set; the training sample set comprises the video viewing information of each historical user;
performing feature extraction on the video viewing information of each historical user in the training sample set to obtain the depression emotion features of each historical user;
and constructing the depression emotion detection model based on a preset LightGBM algorithm and the depression emotion features of the historical users.
6. The method of claim 1, further comprising, after the depression emotion detection result of the user is obtained:
determining whether the user is in a depressed state based on a depression degree score characterized by the depression emotion detection result;
and outputting depression warning information for the user when the user is in a depressed state.
7. A depression emotion detection device, characterized by comprising:
an acquisition unit configured to obtain video viewing information of a user in response to a depression emotion detection instruction for the user; the video viewing information comprises user information, user browsing data, each target video data, and each piece of text information input by the user;
a feature extraction unit configured to perform feature extraction on the user information, the user browsing data, each target video data, and each piece of text information to obtain depression emotion features of the user; the depression emotion features comprise a first feature corresponding to the user information, a second feature corresponding to the user browsing data, a third feature corresponding to each target video data, and a fourth feature corresponding to each piece of text information;
and a detection unit configured to input the depression emotion features of the user into a pre-constructed depression emotion detection model to obtain a depression emotion detection result of the user.
8. The device of claim 7, wherein the detection unit comprises:
an acquisition subunit configured to acquire a training sample set; the training sample set comprises the video viewing information of each historical user;
a feature extraction subunit configured to perform feature extraction on the video viewing information of each historical user in the training sample set to obtain the depression emotion features of each historical user;
and a construction subunit configured to construct the depression emotion detection model based on a preset LightGBM algorithm and the depression emotion features of the historical users.
9. The device of claim 7, wherein the feature extraction unit comprises:
a first execution subunit configured to vectorize the user information and the user browsing data respectively to obtain the first feature corresponding to the user information and the second feature corresponding to the user browsing data;
a second execution subunit configured to classify each target video data with a pre-trained video classification model to obtain a video depression emotion score of each target video data, and determine the third feature corresponding to each target video data according to each video depression emotion score;
a third execution subunit configured to process each piece of text information with a pre-trained word vector model to obtain a text depression emotion score of each piece of text information, and determine the fourth feature corresponding to each piece of text information according to the text depression emotion score of each piece of text information;
and a fourth execution subunit configured to compose the depression emotion features of the user from the first feature, the second feature, the third feature, and the fourth feature.
10. The device of claim 7, further comprising:
a determining unit configured to determine whether the user is in a depressed state based on a depression degree score characterized by the depression emotion detection result;
and an output unit configured to output depression warning information for the user when the user is in a depressed state.
CN202210031565.0A 2022-01-12 2022-01-12 Depression emotion detection method and device Pending CN114359813A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210031565.0A CN114359813A (en) 2022-01-12 2022-01-12 Depression emotion detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210031565.0A CN114359813A (en) 2022-01-12 2022-01-12 Depression emotion detection method and device

Publications (1)

Publication Number Publication Date
CN114359813A true CN114359813A (en) 2022-04-15

Family

ID=81109155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210031565.0A Pending CN114359813A (en) 2022-01-12 2022-01-12 Depression emotion detection method and device

Country Status (1)

Country Link
CN (1) CN114359813A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115565643A (en) * 2022-10-14 2023-01-03 杭州中暖科技有限公司 Grading management system for family education and mental health education


Similar Documents

Publication Publication Date Title
CN108829822B (en) Media content recommendation method and device, storage medium and electronic device
CN109874029B (en) Video description generation method, device, equipment and storage medium
CN110163647B (en) Data processing method and device
CN110263248B (en) Information pushing method, device, storage medium and server
WO2016085409A1 (en) A method and system for sentiment classification and emotion classification
CN113505264B (en) Method and system for recommending video
CN109492221B (en) Information reply method based on semantic analysis and wearable equipment
CN108009297B (en) Text emotion analysis method and system based on natural language processing
KR102407057B1 (en) Systems and methods for analyzing the public data of SNS user channel and providing influence report
JP2020052463A (en) Information processing method and information processing apparatus
CN110955750A (en) Combined identification method and device for comment area and emotion polarity, and electronic equipment
CN114783421A (en) Intelligent recommendation method and device, equipment and medium
CN113315988B (en) Live video recommendation method and device
CN111931073B (en) Content pushing method and device, electronic equipment and computer readable medium
Haque et al. Opinion mining from bangla and phonetic bangla reviews using vectorization methods
CN111242710A (en) Business classification processing method and device, service platform and storage medium
CN114359813A (en) Depression emotion detection method and device
CN110852071A (en) Knowledge point detection method, device, equipment and readable storage medium
JP2020140692A (en) Sentence extracting system, sentence extracting method, and program
CN113658690A (en) Intelligent medical guide method and device, storage medium and electronic equipment
CN113641837A (en) Display method and related equipment thereof
CN112417210A (en) Body-building video query method, device, terminal and storage medium
CN117349515A (en) Search processing method, electronic device and storage medium
CN111933107A (en) Speech recognition method, speech recognition device, storage medium and processor
US8666987B2 (en) Apparatus and method for processing documents to extract expressions and descriptions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination