CN112784606A - Method and related device for determining user attribute information


Info

Publication number: CN112784606A (application CN202110055642.1A)
Authority: CN (China)
Prior art keywords: user attribute, information, input information, attribute information, user
Legal status: Pending (the status is an assumption and not a legal conclusion; Google has not performed a legal analysis)
Application number: CN202110055642.1A
Other languages: Chinese (zh)
Inventors: 叶祺, 李正宇, 张博
Current and Original Assignee: Beijing Sogou Technology Development Co Ltd (the listed assignee may be inaccurate; Google has not performed a legal analysis)
Application filed by Beijing Sogou Technology Development Co Ltd
Priority to CN202110055642.1A
Publication of CN112784606A
Status: Pending

Classifications

    • G06F40/30: Semantic analysis (G06F40/00 Handling natural language data)
    • G06F16/355: Class or cluster creation or modification (G06F16/35 Clustering; Classification of unstructured textual data)
    • G06F40/216: Parsing using statistical methods (G06F40/20 Natural language analysis)
    • G06F40/279: Recognition of textual entities
    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L15/1822: Parsing for meaning understanding (G10L15/18 Speech classification or search using natural language modelling)
    • G10L25/51: Speech or voice analysis techniques specially adapted for comparison or discrimination

Abstract

The application discloses a method and a related device for determining user attribute information, where the method includes: acquiring input information of at least two different modalities of a user; performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information; and comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user. In an input scenario, collecting input information of at least two different modalities of a user is relatively easy; and because the target user attribute information is determined by combining the input information of at least two different modalities, the problem of a single analysis basis is avoided and the accuracy of the target user attribute information is greatly improved.

Description

Method and related device for determining user attribute information
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and a related apparatus for determining user attribute information.
Background
With the rapid development of artificial intelligence, various artificial intelligence products are continuously being released, and people hold great imagination and expectations for artificial intelligence. In order for artificial intelligence to better understand users, serve different users more intelligently, and improve user experience, determining the user attribute information of a user is particularly important.
Generally, user data of a single modality is collected through multiple channels and analyzed to obtain user attribute information, such as demographic attributes like region, age, and gender, and product behavior attributes like product category, activity frequency, and product preference.
However, the inventors found that multi-channel collection of user data of a single modality is difficult; moreover, deriving user attribute information from single-modality user data alone suffers from a single analysis basis, so the obtained user attribute information is prone to being inaccurate.
Disclosure of Invention
In view of this, the present application provides a method and a related apparatus for determining user attribute information, so as to avoid the problem of a single analysis basis and greatly improve the accuracy of determining user attribute information.
In a first aspect, an embodiment of the present application provides a method for determining user attribute information, applied to an input scenario, the method comprising:
acquiring input information of at least two different modalities of a user;
performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information;
and comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user.
Optionally, the input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information.
Optionally, when the input information of the at least two different modalities includes first text input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information includes:
performing semantic extraction processing on the first text input information to obtain first semantic information related to user attributes in the first text input information;
and carrying out user attribute identification on the first semantic information by utilizing a preset text user attribute identifier to obtain first user attribute information.
Optionally, when the input information of the at least two different modalities includes first voice input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information includes:
converting the first voice input information into second text input information;
performing semantic extraction processing on the second text input information to obtain second semantic information related to user attributes in the second text input information;
performing user attribute identification on the second semantic information by using a preset text user attribute identifier to obtain second user attribute information;
performing voice feature extraction processing based on the first voice input information to obtain corresponding voice features;
and carrying out user attribute recognition on the voice features by utilizing a preset voice user attribute recognizer to obtain third user attribute information.
Optionally, the performing, based on the first voice input information, voice feature extraction processing to obtain a corresponding voice feature includes:
preprocessing the first voice input information to obtain second voice input information;
and performing voice feature extraction processing on the second voice input information to obtain the voice feature of the second voice input information.
Optionally, when the input information of the at least two different modalities includes image input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information includes:
preprocessing the image input information to obtain image information in a preset format;
and carrying out user attribute identification on the image information by utilizing a preset image user attribute identifier to obtain fourth user attribute information.
Optionally, the comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user includes:
obtaining the probability of each piece of user attribute information by using a preset comprehensive judger, the preset comprehensive judger being obtained by pre-training a deep learning network based on input information samples of different modalities and corresponding user attribute information samples;
and if the probability of a piece of user attribute information is greater than or equal to a preset probability, determining that piece of user attribute information as the target user attribute information of the user.
Optionally, the user attribute information includes coarse-grained user attribute information and/or fine-grained user attribute information.
Optionally, the method further includes:
storing the at least two pieces of user attribute information in correspondence with the probability of each piece of user attribute information; and/or,
storing the user in correspondence with the target user attribute information.
In a second aspect, an embodiment of the present application provides an apparatus for determining user attribute information, where the apparatus is applied to an input scenario, and the apparatus includes:
a first obtaining unit, configured to obtain input information of at least two different modalities of a user;
a second obtaining unit, configured to perform user attribute identification on the input information of the at least two different modalities respectively, so as to obtain at least two corresponding pieces of user attribute information;
and a determining unit, configured to comprehensively judge the at least two pieces of user attribute information and determine the target user attribute information of the user.
Optionally, the input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information.
Optionally, when the input information of at least two different modalities includes the first text input information, the second obtaining unit includes:
the first obtaining subunit is used for performing semantic extraction processing on the first text input information to obtain first semantic information related to user attributes in the first text input information;
and the second obtaining subunit is used for carrying out user attribute identification on the first semantic information by using a preset text user attribute identifier to obtain first user attribute information.
Optionally, when the input information of at least two different modalities includes the first speech input information, the second obtaining unit includes:
the conversion subunit is used for converting the first voice input information into second text input information;
the third obtaining subunit is configured to perform semantic extraction processing on the second text input information, and obtain second semantic information related to the user attribute in the second text input information;
a fourth obtaining subunit, configured to perform user attribute identification on the second semantic information by using a preset text user attribute identifier, so as to obtain second user attribute information;
a fifth obtaining subunit, configured to perform speech feature extraction processing based on the first speech input information, so as to obtain a corresponding speech feature;
and the sixth obtaining subunit is configured to perform user attribute recognition on the voice features by using a preset voice user attribute recognizer, and obtain third user attribute information.
Optionally, the fifth obtaining subunit includes:
the first obtaining module is used for preprocessing the first voice input information to obtain second voice input information;
and the second obtaining module is used for carrying out voice feature extraction processing on the second voice input information to obtain the voice feature of the second voice input information.
Optionally, when the input information of at least two different modalities includes image input information, the second obtaining unit includes:
the seventh obtaining subunit is configured to pre-process the image input information to obtain image information in a preset format;
and the eighth obtaining subunit is configured to perform user attribute identification on the image information by using a preset image user attribute identifier, and obtain fourth user attribute information.
Optionally, the determining unit includes:
a ninth obtaining subunit, configured to obtain the probability of each piece of user attribute information by using a preset comprehensive judger, where the preset comprehensive judger is obtained by pre-training a deep learning network based on input information samples of different modalities and corresponding user attribute information samples;
and the determining subunit is used for determining the user attribute information as the target user attribute information of the user if the probability of the user attribute information is greater than or equal to the preset probability.
Optionally, the user attribute information includes coarse-grained user attribute information and/or fine-grained user attribute information.
Optionally, the apparatus further comprises:
a first storage unit, configured to store the at least two pieces of user attribute information in correspondence with the probability of each piece of user attribute information; and/or,
a second storage unit, configured to store the user in correspondence with the target user attribute information.
In a third aspect, an embodiment of the present application provides an apparatus for determining user attribute information, which is applied to an input scenario, the apparatus including a memory, and one or more programs, where the one or more programs are stored in the memory, and configured to be executed by one or more processors, the one or more programs including instructions for:
acquiring input information of at least two different modalities of a user;
performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information;
and comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user.
In a fourth aspect, embodiments of the present application provide a machine-readable medium having stored thereon instructions, which, when executed by one or more processors, cause an apparatus to perform the method of determining user attribute information as set forth in any one of the first aspects.
Compared with the prior art, the method has the advantages that:
By adopting the technical solution of the embodiments of the present application, input information of at least two different modalities of a user is obtained; user attribute identification is performed on the input information of each of the at least two different modalities to obtain at least two corresponding pieces of user attribute information; and the at least two pieces of user attribute information are comprehensively judged to determine the target user attribute information of the user. In an input scenario, collecting input information of at least two different modalities of a user is relatively easy; and because the target user attribute information is determined by combining the input information of at least two different modalities, the problem of a single analysis basis is avoided and the accuracy of the target user attribute information is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed to be used in the description of the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments described in the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a system framework related to an application scenario in an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for determining user attribute information according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another method for determining user attribute information according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an apparatus for determining user attribute information according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an apparatus for determining user attribute information according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a server according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
At present, user data of a single modality is generally collected through multiple channels and analyzed to obtain user attribute information. However, the inventors found that multi-channel collection of user data of a single modality is difficult; moreover, deriving user attribute information from single-modality user data alone suffers from a single analysis basis, so the obtained user attribute information is prone to being inaccurate.
To solve this problem, in the embodiments of the present application, input information of at least two different modalities of a user is obtained; user attribute identification is performed on the input information of each modality to obtain at least two corresponding pieces of user attribute information; and the at least two pieces of user attribute information are comprehensively judged to determine the target user attribute information of the user. It can be seen that in an input scenario, collecting input information of at least two different modalities of a user is relatively easy; and because the determination combines input information of at least two different modalities, the problem of a single analysis basis is avoided and the accuracy of the target user attribute information is greatly improved.
For example, the embodiments of the present application may be applied to the scenario shown in fig. 1, which includes a client 101 and a processor 102 belonging to the same artificial intelligence product. The user performs multi-modal input through the client 101, and the processor 102 determines the target user attribute information of the user by using the implementation provided by the embodiments of the present application, so that the processor 102 can provide artificial intelligence services based on the target user attribute information and the product behaves more intelligently.
It is to be understood that, in the above application scenarios, although the actions of the embodiments of the present application are described as being performed by the processor 102, the present application is not limited in terms of the subject of execution as long as the actions disclosed in the embodiments of the present application are performed.
It is to be understood that the above scenario is only one example of a scenario provided in the embodiment of the present application, and the embodiment of the present application is not limited to this scenario.
The following describes in detail a specific implementation manner of the method for determining user attribute information and the related apparatus in the embodiment of the present application with reference to the drawings.
Exemplary method
Referring to fig. 2, a flowchart of a method for determining user attribute information in an embodiment of the present application is shown. In this embodiment, the method is applied to an input scenario and may include, for example, the following steps:
step 201: input information of at least two different modalities of a user is obtained.
In the prior art, user attribute information is determined by collecting user data of a single modality through multiple channels and analyzing it, and because the analysis basis is single, the accuracy of the user attribute information is low. In the embodiments of the present application, considering that user attribute information naturally surfaces in a user's input information in an input scenario, especially a conversation, chat, or instant messaging scenario, collecting the user's input information is relatively simple. Therefore, to improve the accuracy of determining user attribute information, input information of at least two different modalities of the user is first collected, that is, step 201 is executed.
In the embodiments of the present application, common user input modes include a text input mode, a voice input mode, and an image input mode. The input information corresponding to the text input mode is text input information, the input information corresponding to the voice input mode is voice input information, and the input information corresponding to the image input mode is image input information; therefore, any at least two of the above three kinds of input information are obtained when step 201 is executed. That is, in an optional implementation of the embodiments of the present application, the input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information.
As an example of step 201, the obtained input information of at least two different modalities of the user includes the text input information "Dad, buy me a mountain bike after I get into high school", the voice input information "Dad, buy me a mountain bike after I get into high school", and the image input information, namely an image of a mountain bike.
Step 202: and respectively carrying out user attribute identification on the input information of at least two different modes to obtain corresponding at least two user attribute information.
In this embodiment of the application, after the input information of the user in at least two different modalities is obtained in step 201, user attribute identification is performed on the input information of each modality by using the identification mode corresponding to that modality, so as to obtain the user attribute information corresponding to the input information of that modality; in this way, at least two pieces of user attribute information corresponding to the input information of the at least two different modalities are obtained.
The user attribute information may be coarse-grained user attribute information, such as coarse-grained slot information of names, professions, commodities, and the like, and/or coarse-grained category information of ages, professions, family members, income situations, and the like. The user attribute information may also be fine-grained user attribute information, such as fine-grained slot information of street names, city names, school names, company names, and the like, and/or fine-grained feature information of living environments, activity regions, work environments, and the like. Therefore, in an optional implementation manner of the embodiment of the present application, the user attribute information includes coarse-grained user attribute information and/or fine-grained user attribute information.
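The coarse-grained/fine-grained structure described above can be made concrete with a small sketch; the class and field names below are hypothetical illustrations, not terms from this application.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserAttributeInfo:
    """Hypothetical container for one piece of user attribute information;
    field names are illustrative, not taken from the application."""
    coarse_slots: Dict[str, str] = field(default_factory=dict)   # e.g. {"commodity": "bicycle"}
    coarse_categories: List[str] = field(default_factory=list)   # e.g. ["student"]
    fine_slots: Dict[str, str] = field(default_factory=dict)     # e.g. {"school_name": "..."}
    fine_features: List[str] = field(default_factory=list)       # e.g. ["mountain bike"]

attrs = UserAttributeInfo(coarse_categories=["student"],
                          fine_features=["mountain bike"])
```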
The input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information; the specific implementation of step 202 differs for each of these three modalities, as follows:
first, for text input information, when step 202 is executed, first, semantic information related to user attributes, for example, slot position information and intention information related to the user attributes, needs to be extracted from the text input information; and then, realizing the user attribute identification of the semantic information through a preset text user attribute identifier to obtain corresponding user attribute information. Therefore, in an alternative implementation manner of this embodiment of the present application, when the input information of at least two different modalities includes the first text input information, step 202 may include the following steps, for example:
step A: and performing semantic extraction processing on the first text input information to obtain first semantic information related to the user attribute in the first text input information.
And B: and carrying out user attribute identification on the first semantic information by using a preset text user attribute identifier to obtain first user attribute information.
The preset text user attribute identifier may be a text user attribute classifier obtained by pre-training a preset classification algorithm on text-form information samples and user attribute label information, where the preset classification algorithm may be, for example, a shallow model such as naive Bayes or a support vector machine, or a deep model such as a multilayer perceptron, a recurrent neural network, or a convolutional neural network; the preset text user attribute identifier may also be a user attribute information miner set in advance for text, and is not specifically limited in the embodiments of the present application.
As an example, when the first text input information is "Dad, buy me a mountain bike after I get into high school", the corresponding first user attribute information is the coarse-grained category information "student" and the fine-grained slot information "mountain bike".
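As an illustration of steps A and B, here is a minimal sketch of a shallow text user attribute classifier of the kind named above (naive Bayes over bag-of-words features), using scikit-learn; the toy training samples and labels, and the collapsing of semantic extraction and classification into a single pipeline, are simplifying assumptions rather than the application's prescribed implementation.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the "text-form information samples and user attribute
# label information" mentioned above; real samples would come from input logs.
texts = [
    "buy me a mountain bike after I get into high school",
    "my colleagues and I have a meeting at the office this afternoon",
]
labels = ["student", "office worker"]

# A shallow naive-Bayes classifier, one of the preset classification
# algorithms the description names for the text user attribute identifier.
text_attribute_classifier = make_pipeline(TfidfVectorizer(), MultinomialNB())
text_attribute_classifier.fit(texts, labels)

print(text_attribute_classifier.predict(["I will take the high school exam soon"]))
```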
Secondly, for voice input information, when step 202 is executed, the voice input information not only needs to be converted into text input information so that the corresponding user attribute information can be obtained through steps A to B above, but voice features also need to be extracted from the voice input information and recognized by a preset voice user attribute recognizer to obtain further corresponding user attribute information. Therefore, in an optional implementation of this embodiment of the present application, when the input information of the at least two different modalities includes first voice input information, step 202 may include, for example, the following steps:
and C: the first speech input information is converted into second text input information.
Step D: and performing semantic extraction processing on the second text input information to obtain second semantic information related to the user attribute in the second text input information.
Step E: and carrying out user attribute identification on the second semantic information by using a preset text user attribute identifier to obtain second user attribute information.
Step F: and performing voice feature extraction processing based on the first voice input information to obtain corresponding voice features.
Because the quality of the voice input information is affected by factors such as aliasing, higher harmonic distortion, and high-frequency attenuation introduced by the user's vocal organs and by the equipment collecting the voice, directly extracting voice features from the raw voice input information yields features that are not accurate enough. Therefore, to improve the accuracy of the voice features, preprocessing such as pre-emphasis, framing, and windowing needs to be performed on the voice input information before the voice feature extraction. That is, in an optional implementation of the embodiment of the present application, step F may include, for example, the following steps:
step F1: preprocessing the first voice input information to obtain second voice input information;
step F2: and performing voice feature extraction processing on the second voice input information to obtain the voice feature of the second voice input information.
Step G: and carrying out user attribute identification on the voice characteristics by utilizing a preset voice user attribute identifier to obtain third user attribute information.
The preset voice user attribute recognizer may be a voice user attribute classifier obtained by pre-training a preset classification algorithm on voice feature samples and user attribute label information, where the preset classification algorithm may be, for example, a support vector machine, a multilayer perceptron, a recurrent neural network, a convolutional neural network, or a probabilistic linear discriminant analysis algorithm; the preset voice user attribute recognizer may also be a user attribute information extractor preset for voice, and is not specifically limited in this embodiment.
It should be noted that, in the embodiment of the present application, the execution order of steps C to E and steps F to G is not limited, and the steps C to E may be executed first and then the steps F to G may be executed, the steps F to G may be executed first and then the steps C to E may be executed, or the steps C to E and the steps F to G may be executed simultaneously.
As an example, when the first voice input information is "Dad, buy me a mountain bike after I get into high school", the corresponding second user attribute information is the coarse-grained category information "student" and the fine-grained slot information "mountain bike", and the corresponding third user attribute information is the coarse-grained category information "boy" and "teenager".
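Steps F1 and F2 can be illustrated with a short NumPy sketch of the preprocessing named above (pre-emphasis, framing, windowing) followed by a simple per-frame spectral feature; the frame sizes, pre-emphasis coefficient, and the choice of a log power spectrum as the feature are illustrative assumptions, not the application's prescribed pipeline.

```python
import numpy as np

def preprocess_and_extract(signal: np.ndarray, sr: int = 16000,
                           frame_ms: int = 25, hop_ms: int = 10,
                           alpha: float = 0.97) -> np.ndarray:
    """Step F1: pre-emphasis, framing, windowing; step F2: per-frame features."""
    # Pre-emphasis compensates for high-frequency attenuation in the signal.
    emphasized = np.append(signal[0], signal[1:] - alpha * signal[:-1])
    frame_len = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    if len(emphasized) < frame_len:                      # pad very short input
        emphasized = np.pad(emphasized, (0, frame_len - len(emphasized)))
    n_frames = 1 + (len(emphasized) - frame_len) // hop
    window = np.hamming(frame_len)
    feats = []
    for i in range(n_frames):
        frame = emphasized[i * hop : i * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2          # power spectrum
        feats.append(np.log(power + 1e-10))              # log for stability
    return np.stack(feats)

# features = preprocess_and_extract(waveform)  # waveform: 1-D float array
```

These features would then be fed to the preset voice user attribute recognizer of step G.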
Thirdly, for image input information, when step 202 is executed, the image input information is first preprocessed into image information of a uniform format; then, user attribute identification is performed on the image information by a preset image user attribute identifier to obtain the corresponding user attribute information. Therefore, in an optional implementation of this embodiment of the present application, when the input information of the at least two different modalities includes image input information, step 202 may include the following steps:
step H: and preprocessing the image input information to obtain image information in a preset format.
The image input information may be static image input information, for example a static picture; it may also be dynamic image input information, such as an animated picture or a video. For dynamic image input information, the preprocessing extracts it frame by frame into static image input information.
Step I: and carrying out user attribute identification on the image information by using a preset image user attribute identifier to obtain fourth user attribute information.
The preset image user attribute identifier may be an image user attribute classifier obtained by pre-training a preset classification algorithm on image information samples and user attribute label information, where the preset classification algorithm may be, for example, a convolutional neural network such as LeNet, AlexNet, the VGG series, the Inception series, the ResNet series, or SENet; the preset image user attribute identifier may also be a user attribute information extractor preset for images, and is not specifically limited in this embodiment of the application.
As an example, when the image input information is an image of a mountain bike, the corresponding fourth user attribute information is the fine-grained feature information "mountain bike".
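A minimal sketch of steps H and I using PyTorch/torchvision follows; the ResNet-18 backbone, the number of attribute classes, the file name, and the untrained weights are all illustrative assumptions, and in the application the classifier would be trained on image information samples and user attribute label information as described above.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Step H: preprocess the image input information into a preset format
# (fixed size, normalized tensor).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Step I: a convolutional network (here ResNet-18, one of the series the
# description names) with its head replaced for user attribute classes.
num_attribute_classes = 10                      # illustrative value
model = models.resnet18(weights=None)           # weights would come from training
model.fc = torch.nn.Linear(model.fc.in_features, num_attribute_classes)
model.eval()

image = preprocess(Image.open("input.png").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(image), dim=1)  # per-attribute probabilities
```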
Step 203: and comprehensively judging at least two kinds of user attribute information to determine the target user attribute information of the user.
In the embodiment of the present application, after the at least two types of user attribute information are obtained in step 202, in order to improve the accuracy of determining the user attribute information, the at least two types of user attribute information need to be comprehensively judged to determine the target user attribute information of the user, so that the problem of a single analysis basis is avoided, and the accuracy of determining the user attribute information is greatly improved.
In a specific implementation of step 203, the input information of the at least two different modalities and the corresponding at least two pieces of user attribute information are input into a preset comprehensive judger obtained by pre-training, which outputs the probability of each piece of user attribute information; the larger the probability, the more likely that piece of user attribute information is the target user attribute information. A lower limit on the probability for determining user attribute information as target user attribute information is set in advance as the preset probability, and when the probability of a piece of user attribute information is greater than or equal to the preset probability, that piece can be determined as target user attribute information. Therefore, in an optional implementation of this embodiment of the present application, step 203 may include, for example, the following steps:
step J: obtaining the probability of each user attribute information by using a preset comprehensive judger; the preset comprehensive judger is obtained by training a deep learning network in advance based on input information samples in different modes and corresponding user attribute information samples.
The preset comprehensive judger fully mines and learns, through a deep learning network, the association between modal input information samples and the corresponding user attribute information samples. The deep learning network may be, for example, a Wide & Deep model, where the Wide part is a simple linear model such as logistic regression and the Deep part is a deep learning model, so that the model has both memorization and generalization capability. Of course, the embodiments of the present application do not limit the specific implementation of the deep learning network.
Step K: and if the probability of the user attribute information is greater than or equal to the preset probability, determining the user attribute information as the target user attribute information of the user.
In addition, in the embodiment of the present application, on the basis of the above description, the evidence used to determine the target user attribute information from the at least two pieces of user attribute information may also be stored, that is, the at least two pieces of user attribute information and the probability of each piece are stored in correspondence. The target user attribute information of the user may also be stored, that is, the user and the target user attribute information are stored in correspondence. Therefore, in an optional implementation of the embodiment of the present application, after step J, the method may further include, for example, step L: storing the at least two pieces of user attribute information in correspondence with the probability of each piece of user attribute information; and/or, after step 203, the method may further include, for example, step M: storing the user in correspondence with the target user attribute information.
In addition, in the embodiment of the present application, steps 201 to 203 may be performed in real time, so that the determined user attribute information stays accurate. As time accumulates, the user's input information gradually increases, and the user attribute information determined in the manner provided by the embodiment of the present application becomes more and more accurate.
Through the various implementations provided by this embodiment, input information of at least two different modalities of a user is obtained; user attribute identification is performed on the input information of each of the at least two different modalities to obtain at least two corresponding pieces of user attribute information; and the at least two pieces of user attribute information are comprehensively judged to determine the target user attribute information of the user. In an input scenario, collecting input information of at least two different modalities of a user is relatively easy; and because the target user attribute information is determined by combining the input information of at least two different modalities, the problem of a single analysis basis is avoided and the accuracy of the target user attribute information is greatly improved.
Based on the above embodiments, taking the input information of different modalities of the user including text input information, voice input information, and image input information as an example, a specific implementation manner of the method for determining user attribute information in the embodiment of the present application is described in detail through the embodiments below.
Referring to fig. 3, a flowchart of another method for determining user attribute information in the embodiment of the present application is shown. In this embodiment, the method may comprise, for example, the steps of:
step 301: text input information, voice input information, and image input information of a user are obtained.
Step 302: performing user attribute identification on the text input information, the voice input information, and the image input information respectively to obtain various corresponding pieces of user attribute information.
Step 303: comprehensively judging the various pieces of user attribute information to determine the target user attribute information of the user.
Step 304: storing the user in correspondence with the target user attribute information.
For the relevant descriptions of steps 301 to 304, reference may be made to the relevant descriptions in the above embodiments, which are not described herein again.
Through the various implementations provided by this embodiment, input information of at least two different modalities of a user is obtained; user attribute identification is performed on the input information of each of the at least two different modalities to obtain at least two corresponding pieces of user attribute information; and the at least two pieces of user attribute information are comprehensively judged to determine the target user attribute information of the user. In an input scenario, collecting input information of at least two different modalities of a user is relatively easy; and because the target user attribute information is determined by combining the input information of at least two different modalities, the problem of a single analysis basis is avoided and the accuracy of the target user attribute information is greatly improved.
Exemplary devices
Referring to fig. 4, a schematic structural diagram of an apparatus for determining user attribute information in an embodiment of the present application is shown. In this embodiment, the apparatus, which is applied to an input scene, may specifically include:
a first obtaining unit 401, configured to obtain input information of at least two different modalities of a user;
a second obtaining unit 402, configured to perform user attribute identification on the input information of the at least two different modalities respectively, so as to obtain at least two corresponding pieces of user attribute information;
the determining unit 403 is configured to perform comprehensive judgment on at least two types of user attribute information, and determine target user attribute information of a user.
In an alternative implementation of the embodiment of the present application, the input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information.
In an optional implementation manner of the embodiment of the present application, when the input information of at least two different modalities includes the first text input information, the second obtaining unit 402 includes:
the first obtaining subunit is used for performing semantic extraction processing on the first text input information to obtain first semantic information related to user attributes in the first text input information;
and the second obtaining subunit is used for carrying out user attribute identification on the first semantic information by using a preset text user attribute identifier to obtain first user attribute information.
In an optional implementation manner of the embodiment of the present application, when the input information of at least two different modalities includes the first speech input information, the second obtaining unit 402 includes:
the conversion subunit is used for converting the first voice input information into second text input information;
the third obtaining subunit is configured to perform semantic extraction processing on the second text input information, and obtain second semantic information related to the user attribute in the second text input information;
a fourth obtaining subunit, configured to perform user attribute identification on the second semantic information by using a preset text user attribute identifier, so as to obtain second user attribute information;
a fifth obtaining subunit, configured to perform speech feature extraction processing based on the first speech input information, so as to obtain a corresponding speech feature;
and the sixth obtaining subunit is configured to perform user attribute recognition on the voice features by using a preset voice user attribute recognizer, and obtain third user attribute information.
In an optional implementation manner of the embodiment of the present application, the fifth obtaining subunit includes:
the first obtaining module is used for preprocessing the first voice input information to obtain second voice input information;
and the second obtaining module is used for carrying out voice feature extraction processing on the second voice input information to obtain the voice feature of the second voice input information.
In an optional implementation manner of the embodiment of the present application, when the input information of at least two different modalities includes image input information, the second obtaining unit 402 includes:
the seventh obtaining subunit is configured to pre-process the image input information to obtain image information in a preset format;
and the eighth obtaining subunit is configured to perform user attribute identification on the image information by using a preset image user attribute identifier, and obtain fourth user attribute information.
In an optional implementation manner of the embodiment of the present application, the determining unit 403 includes:
a ninth obtaining subunit, configured to obtain the probability of each piece of user attribute information by using a preset comprehensive judger, where the preset comprehensive judger is obtained by pre-training a deep learning network based on input information samples of different modalities and corresponding user attribute information samples;
and the determining subunit is used for determining the user attribute information as the target user attribute information of the user if the probability of the user attribute information is greater than or equal to the preset probability.
In an optional implementation manner of the embodiment of the present application, the user attribute information includes coarse-grained user attribute information and/or fine-grained user attribute information.
In an optional implementation manner of the embodiment of the present application, the apparatus further includes:
a first storage unit, configured to store the at least two pieces of user attribute information in correspondence with the probability of each piece of user attribute information; and/or,
a second storage unit, configured to store the user in correspondence with the target user attribute information.
Through the various implementations provided by this embodiment, input information of at least two different modalities of a user is obtained; user attribute identification is performed on the input information of each of the at least two different modalities to obtain at least two corresponding pieces of user attribute information; and the at least two pieces of user attribute information are comprehensively judged to determine the target user attribute information of the user. In an input scenario, collecting input information of at least two different modalities of a user is relatively easy; and because the target user attribute information is determined by combining the input information of at least two different modalities, the problem of a single analysis basis is avoided and the accuracy of the target user attribute information is greatly improved.
Fig. 5 is a block diagram illustrating an apparatus 500 for determining user attribute information according to an example embodiment. For example, the apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the apparatus 500 may include one or more of the following components: processing component 502, memory 504, power component 506, multimedia component 508, audio component 510, input/output (I/O) interface 512, sensor component 514, and communication component 516.
The processing component 502 generally controls overall operation of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation at the device 500. Examples of such data include instructions for any application or method operating on device 500, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the device 500. The power components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 500.
The multimedia component 508 includes a screen that provides an output interface between the device 500 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gesture actions on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the rear-facing camera may receive external multimedia data when the device 500 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, audio component 510 includes a Microphone (MIC) configured to receive external audio signals when apparatus 500 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the device 500. For example, the sensor assembly 514 may detect an open/closed state of the device 500 and the relative positioning of components, such as the display and keypad of the apparatus 500; the sensor assembly 514 may also detect a change in position of the apparatus 500 or a component of the apparatus 500, the presence or absence of user contact with the apparatus 500, the orientation or acceleration/deceleration of the apparatus 500, and a change in the temperature of the apparatus 500. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate communication between the apparatus 500 and other devices in a wired or wireless manner. The apparatus 500 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 500 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the apparatus 500 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium having instructions therein, which when executed by a processor of a mobile terminal, enable the mobile terminal to perform a method of determining user attribute information, the method comprising:
acquiring input information of at least two different modalities of a user;
performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information;
and comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user.
In an optional implementation manner of the embodiment of the present application, the input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information.
In an optional implementation manner of the embodiment of the present application, when the input information of the at least two different modalities includes first text input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain at least two corresponding pieces of user attribute information includes:
performing semantic extraction processing on the first text input information to obtain first semantic information related to user attributes in the first text input information;
and carrying out user attribute identification on the first semantic information by utilizing a preset text user attribute identifier to obtain first user attribute information.
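As a hedged illustration of this text branch, the sketch below substitutes a small keyword table for learned semantic extraction and a simple lookup for the preset text user attribute identifier; both are assumptions for demonstration only.

```python
import re

# Illustrative cue table standing in for learned semantic extraction.
ATTRIBUTE_CUES = {
    "basketball": "sports enthusiast",
    "sneakers": "sports enthusiast",
    "lipstick": "beauty interest",
}

def extract_semantics(first_text_input):
    # First semantic information: tokens that carry user-attribute signal.
    tokens = re.findall(r"\w+", first_text_input.lower())
    return [t for t in tokens if t in ATTRIBUTE_CUES]

def text_attribute_recognizer(semantic_info):
    # First user attribute information: labels mapped from the cues.
    return {ATTRIBUTE_CUES[cue] for cue in semantic_info}

print(text_attribute_recognizer(extract_semantics("Looking for basketball sneakers")))
# {'sports enthusiast'}
```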
In an optional implementation manner of the embodiment of the present application, when the input information of the at least two different modalities includes first voice input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain the corresponding at least two pieces of user attribute information includes:
converting the first voice input information into second text input information;
performing semantic extraction processing on the second text input information to obtain second semantic information related to user attributes in the second text input information;
performing user attribute identification on the second semantic information by using a preset text user attribute identifier to obtain second user attribute information;
performing voice feature extraction processing based on the first voice input information to obtain corresponding voice features;
and carrying out user attribute recognition on the voice features by utilizing a preset voice user attribute recognizer to obtain third user attribute information.
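A compact sketch of the two parallel voice paths described above follows; asr, text_recognizer, extract_features, and voice_recognizer are hypothetical callables assumed for illustration.

```python
# Sketch of the voice branch: path (a) recognizes attributes from what was
# said (via speech-to-text), path (b) from how it was said (acoustic features).

def recognize_from_voice(first_voice_input, asr, text_recognizer,
                         extract_features, voice_recognizer):
    # Path (a): convert speech to second text input, then reuse the text branch.
    second_text_input = asr(first_voice_input)
    second_attrs = text_recognizer(second_text_input)  # second user attribute info
    # Path (b): extract acoustic features, then apply the voice recognizer.
    voice_features = extract_features(first_voice_input)
    third_attrs = voice_recognizer(voice_features)     # third user attribute info
    return second_attrs, third_attrs
```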
In an optional implementation manner of the embodiment of the present application, the performing voice feature extraction processing based on the first voice input information to obtain the corresponding voice features includes:
preprocessing the first voice input information to obtain second voice input information;
and performing voice feature extraction processing on the second voice input information to obtain the voice feature of the second voice input information.
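The preprocessing and feature extraction could, for example, look like the NumPy sketch below; trimming silence and using per-frame energy are illustrative choices only, since the application does not fix a particular preprocessing or feature set.

```python
import numpy as np

def preprocess_voice(signal, silence_threshold=0.01):
    # Second voice input information: amplitude-normalized, silence-trimmed signal.
    signal = signal / (np.abs(signal).max() + 1e-9)
    return signal[np.abs(signal) > silence_threshold]

def extract_voice_features(signal, frame_length=256):
    # Illustrative voice feature: per-frame energy; real systems would more
    # likely use MFCCs or learned embeddings.
    n_frames = len(signal) // frame_length
    frames = signal[: n_frames * frame_length].reshape(n_frames, frame_length)
    return frames.std(axis=1)
```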
In an optional implementation manner of the embodiment of the present application, when the input information of the at least two different modalities includes the image input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain the corresponding at least two pieces of user attribute information includes:
preprocessing the image input information to obtain image information in a preset format;
and carrying out user attribute identification on the image information by utilizing a preset image user attribute identifier to obtain fourth user attribute information.
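A sketch of the image branch, under the assumption that the "preset format" is a fixed-size, normalized pixel array; the nearest-neighbour resize and the model callable are illustrative stand-ins for the preset image user attribute identifier.

```python
import numpy as np

def preprocess_image(image, size=(224, 224)):
    # Image information in a preset format: fixed size, pixels scaled to [0, 1].
    # Nearest-neighbour resize; production code would use Pillow or OpenCV.
    h, w = image.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return image[rows][:, cols].astype(np.float32) / 255.0

def image_attribute_recognizer(image, model):
    # Fourth user attribute information, from the preset image identifier.
    return model(preprocess_image(image))
```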
In an optional implementation manner of the embodiment of the present application, the comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user includes:
obtaining the probability of each piece of user attribute information by utilizing a preset comprehensive judger, where the preset comprehensive judger is obtained by pre-training a deep learning network based on input information samples of different modalities and corresponding user attribute information samples;
and if the probability of a piece of user attribute information is greater than or equal to a preset probability, determining that piece of user attribute information as the target user attribute information of the user.
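The comprehensive judgment step can be sketched as below; the judger callable stands in for the pre-trained deep learning network, and the 0.8 threshold is an assumed preset probability, not a value fixed by this application.

```python
def comprehensive_judgment(candidates, judger, preset_probability=0.8):
    """candidates: dict mapping a candidate attribute to its multimodal
    evidence (e.g. the per-modality attribute information gathered above).
    judger: pre-trained network, here any callable returning a probability."""
    probabilities = {attr: judger(attr, evidence)
                     for attr, evidence in candidates.items()}
    # Keep only attributes whose probability reaches the preset threshold.
    target = [attr for attr, p in probabilities.items()
              if p >= preset_probability]
    return target, probabilities
```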
In an optional implementation manner of the embodiment of the present application, the user attribute information includes coarse-grained user attribute information and/or fine-grained user attribute information.
In an optional implementation manner of the embodiment of the present application, the method further includes:
storing the at least two pieces of user attribute information in correspondence with the probability of each piece of user attribute information; and/or,
storing the user and the target user attribute information in correspondence with each other.
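One possible storage layout for these two options, assuming a simple in-memory key-value store; the record structure and field names are illustrative only.

```python
# Hypothetical record for one user: candidates with their probabilities
# (first option) and the determined target attributes (second option).
user_attribute_store = {
    "user_123": {
        "candidates": {"sports enthusiast": 0.91, "beauty interest": 0.40},
        "target": ["sports enthusiast"],  # attributes meeting the preset probability
    }
}
```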
Fig. 6 is a schematic structural diagram of a server in an embodiment of the present application. The server 600 may vary significantly in configuration or performance, and may include one or more central processing units (CPUs) 622 (e.g., one or more processors), memory 632, and one or more storage media 630 (e.g., one or more mass storage devices) storing applications 642 or data 644. The memory 632 and the storage medium 630 may be transient or persistent storage. The program stored in the storage medium 630 may include one or more modules (not shown), and each module may include a series of instruction operations for the server. Furthermore, the central processing unit 622 may be configured to communicate with the storage medium 630 and execute, on the server 600, the series of instruction operations in the storage medium 630.
The server 600 may also include one or more power supplies 626, one or more wired or wireless network interfaces 650, one or more input-output interfaces 658, one or more keyboards 656, and/or one or more operating systems 641, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The embodiments in this description are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are the same or similar, the embodiments may be referred to one another. Since the device disclosed in an embodiment corresponds to the method disclosed in that embodiment, its description is kept brief; for relevant details, refer to the description of the method.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a preferred embodiment of the present application and is not intended to limit the present application in any way. Using the methods and technical content disclosed above, those skilled in the art can make numerous possible variations and modifications to the technical solution of the present application, or modify it into equivalent embodiments, without departing from the scope of the claimed technical solution. Therefore, any simple modification, equivalent change, or refinement made to the above embodiments according to the technical essence of the present application, without departing from the content of the technical solution of the present application, still falls within the protection scope of the technical solution of the present application.

Claims (10)

1. A method for determining user attribute information, applied to an input scenario, the method comprising:
acquiring input information of at least two different modalities of a user;
performing user attribute identification on the input information of the at least two different modalities, respectively, to obtain at least two corresponding pieces of user attribute information;
and comprehensively judging the at least two pieces of user attribute information to determine target user attribute information of the user.
2. The method of claim 1, wherein the input information of the at least two different modalities includes at least two of text input information, voice input information, and image input information.
3. The method according to claim 2, wherein, when the input information of the at least two different modalities includes first text input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain the corresponding at least two pieces of user attribute information comprises:
performing semantic extraction processing on the first text input information to obtain first semantic information related to user attributes in the first text input information;
and carrying out user attribute identification on the first semantic information by utilizing a preset text user attribute identifier to obtain first user attribute information.
4. The method according to claim 2, wherein, when the input information of the at least two different modalities includes first voice input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain the corresponding at least two pieces of user attribute information comprises:
converting the first voice input information into second text input information;
performing semantic extraction processing on the second text input information to obtain second semantic information related to user attributes in the second text input information;
performing user attribute identification on the second semantic information by using a preset text user attribute identifier to obtain second user attribute information;
performing voice feature extraction processing based on the first voice input information to obtain corresponding voice features;
and carrying out user attribute recognition on the voice features by utilizing a preset voice user attribute recognizer to obtain third user attribute information.
5. The method according to claim 4, wherein the performing voice feature extraction processing based on the first voice input information to obtain the corresponding voice features comprises:
preprocessing the first voice input information to obtain second voice input information;
and performing voice feature extraction processing on the second voice input information to obtain the voice feature of the second voice input information.
6. The method according to claim 2, wherein, when the input information of the at least two different modalities includes the image input information, the performing user attribute identification on the input information of the at least two different modalities respectively to obtain the corresponding at least two pieces of user attribute information comprises:
preprocessing the image input information to obtain image information in a preset format;
and carrying out user attribute identification on the image information by utilizing a preset image user attribute identifier to obtain fourth user attribute information.
7. The method according to claim 1, wherein the comprehensively judging the at least two pieces of user attribute information to determine the target user attribute information of the user comprises:
obtaining the probability of each piece of user attribute information by utilizing a preset comprehensive judger, wherein the preset comprehensive judger is obtained by pre-training a deep learning network based on input information samples of different modalities and corresponding user attribute information samples;
and if the probability of a piece of user attribute information is greater than or equal to a preset probability, determining that piece of user attribute information as the target user attribute information of the user.
8. An apparatus for determining user attribute information, applied to an input scenario, the apparatus comprising:
a first obtaining unit, configured to acquire input information of at least two different modalities of a user;
a second obtaining unit, configured to perform user attribute identification on the input information of the at least two different modalities, respectively, to obtain at least two corresponding pieces of user attribute information;
and a determining unit, configured to comprehensively judge the at least two pieces of user attribute information and determine target user attribute information of the user.
9. An apparatus for determining user attribute information, applied to an input scenario, comprising a memory, one or more processors, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for:
acquiring input information of at least two different modalities of a user;
performing user attribute identification on the input information of the at least two different modalities, respectively, to obtain at least two corresponding pieces of user attribute information;
and comprehensively judging the at least two pieces of user attribute information to determine target user attribute information of the user.
10. A machine-readable medium having stored thereon instructions, which when executed by one or more processors, cause an apparatus to perform the method of determining user attribute information of any of claims 1 to 7.
CN202110055642.1A 2021-01-15 2021-01-15 Method and related device for determining user attribute information Pending CN112784606A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110055642.1A CN112784606A (en) 2021-01-15 2021-01-15 Method and related device for determining user attribute information


Publications (1)

Publication Number Publication Date
CN112784606A true CN112784606A (en) 2021-05-11

Family

ID=75756205

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110055642.1A Pending CN112784606A (en) 2021-01-15 2021-01-15 Method and related device for determining user attribute information

Country Status (1)

Country Link
CN (1) CN112784606A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107784372A (en) * 2016-08-24 2018-03-09 阿里巴巴集团控股有限公司 Forecasting Methodology, the device and system of destination object attribute
CN111507774A (en) * 2020-04-28 2020-08-07 上海依图网络科技有限公司 Data processing method and device
CN111563551A (en) * 2020-04-30 2020-08-21 支付宝(杭州)信息技术有限公司 Multi-mode information fusion method and device and electronic equipment


Similar Documents

Publication Publication Date Title
CN107221330B (en) Punctuation adding method and device and punctuation adding device
CN107644646B (en) Voice processing method and device for voice processing
CN108227950B (en) Input method and device
CN109670077B (en) Video recommendation method and device and computer-readable storage medium
CN111539443A (en) Image recognition model training method and device and storage medium
CN109961791B (en) Voice information processing method and device and electronic equipment
CN111210844B (en) Method, device and equipment for determining speech emotion recognition model and storage medium
CN110874145A (en) Input method and device and electronic equipment
CN110991329A (en) Semantic analysis method and device, electronic equipment and storage medium
CN106777016B (en) Method and device for information recommendation based on instant messaging
CN110990534A (en) Data processing method and device and data processing device
CN110764627B (en) Input method and device and electronic equipment
CN111160047A (en) Data processing method and device and data processing device
US11354520B2 (en) Data processing method and apparatus providing translation based on acoustic model, and storage medium
CN112651235A (en) Poetry generation method and related device
CN111739535A (en) Voice recognition method and device and electronic equipment
CN111242205B (en) Image definition detection method, device and storage medium
CN112784151A (en) Method and related device for determining recommendation information
CN111831132A (en) Information recommendation method and device and electronic equipment
CN112818841A (en) Method and related device for recognizing user emotion
CN112784606A (en) Method and related device for determining user attribute information
CN113409766A (en) Recognition method, device for recognition and voice synthesis method
CN109145151B (en) Video emotion classification acquisition method and device
CN110020117B (en) Interest information acquisition method and device and electronic equipment
CN113946228A (en) Statement recommendation method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination