CN108810625A - Multimedia data playback control method, apparatus and terminal - Google Patents

Multimedia data playback control method, apparatus and terminal

Info

Publication number
CN108810625A
CN108810625A (application CN201810591842.7A)
Authority
CN
China
Prior art keywords
information
user
category
target
emotion
Prior art date
Legal status
Pending
Application number
CN201810591842.7A
Other languages
Chinese (zh)
Inventor
崔兰
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201810591842.7A
Publication of CN108810625A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Abstract

Embodiments of the invention disclose a multimedia data playback control method, apparatus and terminal. The method includes: calling a sentiment classification model to classify user information and determine the target emotion category of a user object; identifying, from the user information and according to level-feature reference information associated with the target emotion category, feature indication information that expresses the degree of emotion; determining, from the feature indication information, the user object's target intensity level under the target emotion category; and obtaining, from a multimedia database, and playing multimedia data associated with the target intensity level. Playback of multimedia data can thus be finely controlled according to the user's emotion category and the intensity levels into which that category is subdivided.

Description

Multimedia data playback control method, apparatus and terminal
Technical field
The present invention relates to the field of computer technology, and in particular to a multimedia data playback control method, apparatus and terminal.
Background technology
At present, with the rapid development of computer technology, intelligent products are updated at an ever-increasing pace. To maintain users' purchasing loyalty to and engagement with intelligent products, their functions must be continuously improved or extended to meet user and market demand.
With the continuing development of automatic recognition, how to accurately analyze the information provided by different users so as to determine each user's state, and how to deliver to each user data that matches that state based on the analysis result, has become a current research focus.
Summary of the invention
Embodiments of the present invention provide a multimedia data playback control method, apparatus and terminal, which can finely control the playback of multimedia data according to the user's emotion category and the intensity levels into which that category is subdivided.
In one aspect, an embodiment of the present invention provides a multimedia data playback control method, the method comprising:
obtaining user information of a user object;
calling a sentiment classification model to classify the user information and determine the target emotion category of the user object;
identifying, from the user information and according to level-feature reference information associated with the target emotion category, feature indication information expressing the degree of emotion, and determining, from the feature indication information, the user object's target intensity level under the target emotion category;
obtaining multimedia data associated with the target intensity level from a multimedia database, and playing the obtained multimedia data.
In another aspect, an embodiment of the present invention provides a multimedia data playback control apparatus, the apparatus comprising:
an obtaining unit, configured to obtain user information of a user object;
a determining unit, configured to call a sentiment classification model to classify the user information and determine the target emotion category of the user object;
a recognition unit, configured to identify, from the user information and according to level-feature reference information associated with the target emotion category, feature indication information expressing the degree of emotion;
the determining unit being further configured to determine, from the feature indication information, the user object's target intensity level under the target emotion category;
a playback unit, configured to obtain multimedia data associated with the target intensity level from a multimedia database, and to play the obtained multimedia data.
In yet another aspect, an embodiment of the present invention provides a terminal, including a processor, an input device, an output device and a memory, which are interconnected, wherein the memory is configured to store a computer program comprising program instructions, and the processor is configured to invoke the program instructions to perform the following steps:
obtaining user information of a user object;
calling a sentiment classification model to classify the user information and determine the target emotion category of the user object;
identifying, from the user information and according to level-feature reference information associated with the target emotion category, feature indication information expressing the degree of emotion, and determining, from the feature indication information, the user object's target intensity level under the target emotion category;
obtaining multimedia data associated with the target intensity level from a multimedia database, and playing the obtained multimedia data.
In yet another aspect, an embodiment of the present invention provides a computer storage medium storing computer program instructions which, when executed, implement the multimedia data playback control method described above.
In embodiments of the present invention, a sentiment classification model is called to process the user information of a user object, so as to determine the target emotion category of the user object and the target intensity level under that category; multimedia data associated with the target intensity level can then be played from the multimedia database, controlling the playback of multimedia data more finely and accurately.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the drawings described below show only some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic block diagram of a service system to which a multimedia data playback control method provided by an embodiment of the present invention is applied;
Fig. 2 is an application scenario diagram of a multimedia data playback control method provided by an embodiment of the present invention;
Fig. 3 is a schematic flowchart of a multimedia data playback control method provided by an embodiment of the present invention;
Fig. 4 is a schematic flowchart of calling a sentiment classification model to classify user information, provided by an embodiment of the present invention;
Fig. 5 is a schematic flowchart of calling a sentiment classification model to classify text-type information, provided by an embodiment of the present invention;
Fig. 6 is a schematic flowchart of calling a sentiment classification model to classify image-type information, provided by an embodiment of the present invention;
Fig. 7 is a schematic structural block diagram of a multimedia data playback control apparatus provided by an embodiment of the present invention;
Fig. 8 is a schematic structural block diagram of a terminal provided by an embodiment of the present invention.
Detailed description of embodiments
In embodiments of the present invention, playback control of multimedia data can be performed in the service system shown in Fig. 1. Based on dynamically acquired user information, the user's emotion can be classified and a subdivided intensity level determined, so that multimedia data such as music or video suited to the user's current emotional state can be accurately obtained and played for the user. As shown in Fig. 1, the service system includes user-side terminals such as a smartphone 101, a smart speaker 102 and a wearable smart terminal 103, and a server 104 on the service side. The smartphone 101, smart speaker 102 and wearable smart terminal 103 can establish wired or wireless connections with the server 104 in order to send request data to the server 104 and receive feedback data from it. The smart speaker 102 can also establish connections with the wearable smart terminal 103 and the smartphone 101 respectively, so that the smart speaker 102 can receive the user information uploaded by the smartphone 101 and the wearable smart terminal 103. In one embodiment, the wearable smart terminal 103 may be, for example, a device such as a smartwatch or smart earphones.
In one embodiment, the smart speaker 102 may first obtain the user information of the user object uploaded by the smartphone 101. In one embodiment, the smartphone 101 can collect user information through installed applications (Application, App) capable of receiving information, such as instant messaging applications or camera applications, and upload that user information to the smart speaker 102; for example, text information and/or image information obtained through social software or the camera can be uploaded to the smart speaker 102. After obtaining the user information, the smart speaker 102 calls a sentiment classification model to process the user information and obtain the emotion category of the user object. After the emotion category is determined, the smart speaker 102 can further extract from the user information the feature information used to indicate the degree of emotion, and determine, according to that feature information, the user object's intensity level under the emotion category; the smart speaker 102 can then play the multimedia data associated with that intensity level from the multimedia database stored on the server 104.
In one embodiment, the smart speaker 102 can obtain the user information of the user object through its own integrated information collection modules; for example, the smart speaker 102 may integrate hardware modules with information collection functions such as a camera module and a microphone. The user information obtained by the smart speaker 102 includes text-type information, image-type information, user vital-sign information and the like. The collected user information can be screened according to preset priority rules to determine the user information that the smart speaker 102 finally processes.
In one embodiment, the smart speaker 102 analyzes the user information through a preconfigured sentiment classification model in order to classify the user object. The sentiment classification model can be trained in advance and configured into the smart speaker 102; after obtaining the user information of the user object, the smart speaker 102 calls the sentiment classification model to classify the user information, and the emotion category of the user object can thereby be determined. In one embodiment, the emotion categories may be as shown in Table 1, divided into positive emotions and negative emotions: the emotion categories included under positive emotion are mainly happiness and surprise, and the emotion categories included under negative emotion are mainly disgust, fear, sadness and anger. It should be understood that Table 1 is only an example.
Table 1
Positive emotion | happiness, surprise
Negative emotion | disgust, fear, sadness, anger
In other embodiments, the user object's emotion can also be classified in advance along N preset dimensions. Specifically, according to the PAD emotion analysis method (an emotion classification scheme in which P represents pleasure, A represents arousal, and D represents dominance), user emotions can be divided into eight classes: happy (+P+A+D), bored (-P-A-D), dependent (+P+A-D), disdainful (-P-A+D), relaxed (+P-A+D), anxious (-P+A-D), docile (+P-A-D) and hostile (-P+A+D), as sketched below.
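As an illustration of this eight-way PAD mapping, the following Python sketch enumerates the sign combinations; the helper names are hypothetical and the numeric P/A/D scores are assumed to come from an upstream model not described here.
```python
# A minimal sketch of the PAD mapping described above, assuming each of
# pleasure (P), arousal (A) and dominance (D) has already been scored as
# a signed value; only the signs select one of the eight classes.

PAD_CLASSES = {
    ('+', '+', '+'): 'happy',       # +P +A +D
    ('-', '-', '-'): 'bored',       # -P -A -D
    ('+', '+', '-'): 'dependent',   # +P +A -D
    ('-', '-', '+'): 'disdainful',  # -P -A +D
    ('+', '-', '+'): 'relaxed',     # +P -A +D
    ('-', '+', '-'): 'anxious',     # -P +A -D
    ('+', '-', '-'): 'docile',      # +P -A -D
    ('-', '+', '+'): 'hostile',     # -P +A +D
}

def pad_class(p: float, a: float, d: float) -> str:
    """Map signed P/A/D scores to one of the eight PAD classes."""
    sign = lambda x: '+' if x >= 0 else '-'
    return PAD_CLASSES[(sign(p), sign(a), sign(d))]

print(pad_class(0.6, 0.3, 0.2))    # -> 'happy'
print(pad_class(-0.4, 0.5, -0.1))  # -> 'anxious'
```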
In one embodiment, the sentiment classification model can determine the priority of user information based on priority determination rules. When user information in multiple forms is received simultaneously, the sentiment classification model performs preliminary processing on the user information according to those priority determination rules. A priority determination rule may be: the priority of heart-rate information is set highest, followed by voice/text information, and finally video information. Therefore, in this embodiment of the present invention the final classification result is preferably obtained from heart-rate information, then from voice/text information, and finally from video information. As the second row of Table 2 describes, after classification result A is determined from heart-rate information, classification result B from semantic/text information, classification result C from expression information and classification result D from body-language information, the final decision is classification result A; that is, the classification result determined from heart-rate information is preferentially taken as the final emotion category of the user object. For the result shown in the third row of Table 2, since no heart-rate-based classification result was obtained, after classification result A is determined from semantic/text information and classification result B from both expression information and body-language information, the final decision is classification result A determined from semantic/text information, taken as the emotion category of the user object; the remaining final results shown in Table 2 follow by analogy (a sketch of this fallback logic is given after Table 2).
Table 2
Semantic/text result | Expression result | Body-language result | Heart-rate result | Final decision
B | C | D | A | A
A | B | B | none | A
none | A | B | none | A
none | none | A | none | A
none | none | none | none | no recognition result
For the emotion categories, the user object's intensity level under any emotion category can further be determined, where the emotion categories shown in Table 1 can be further subdivided into multiple intensity levels. For example, each emotion category may be subdivided into three intensity levels, where intensity level 1 is primary intensity, intensity level 2 is intermediate intensity and intensity level 3 is advanced intensity; for instance, under the happiness emotion category, primary intensity corresponds to "a little happy", intermediate intensity corresponds to "very happy", and advanced intensity corresponds to "extremely happy". This embodiment does not limit the way in which the intensity levels of an emotion category are divided.
In one embodiment, the multimedia data stored in the multimedia database has likewise been divided by category and by intensity, and this category and intensity division of the multimedia data is consistent with the user emotion categories and the intensity levels under them described above, so that the multimedia data obtained and played from the multimedia database matches the user's emotion category and intensity. The category and intensity of each item of multimedia data can be marked by adding labels to the multimedia data, or the corresponding multimedia data can be stored in data sets established per category and intensity, thereby distinguishing the categories and intensities of the multimedia data.
When the multimedia data is music data, the category and intensity division can be as shown in Table 3: the categories of the music data can be divided, in correspondence with the emotion categories, into happy, surprised, weary, fearful, sad, angry and so on, and the intensity division of the music data can likewise correspond to the intensity levels under the emotion categories, namely intensity level 1 (primary), intensity level 2 (intermediate) and intensity level 3 (advanced). Under the happy category, primary intensity corresponds to "a little happy" music, intermediate intensity corresponds to "very happy" music, and advanced intensity corresponds to "extremely happy" music. Therefore, when the user's emotion is determined to be a little happy, the "a little happy" music in the music data is played according to Table 3 (a lookup sketch follows Table 3). The multimedia database may be stored on a cloud server or on a terminal-side device such as a smart speaker; this is not limited in the embodiments of the present invention.
Table 3
Music data | Intensity level 1 | Intensity level 2 | Intensity level 3
Happy music | Primary happy music set | Intermediate happy music set | Advanced happy music set
Surprised music | Primary surprised music set | Intermediate surprised music set | Advanced surprised music set
Weary music | Primary weary music set | Intermediate weary music set | Advanced weary music set
Fearful music | Primary fearful music set | Intermediate fearful music set | Advanced fearful music set
Sad music | Primary sad music set | Intermediate sad music set | Advanced sad music set
Angry music | Primary angry music set | Intermediate angry music set | Advanced angry music set
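The Table 3 lookup can be sketched as a simple keyed mapping; the set names below are placeholders standing in for actual playlists, and the function name is hypothetical.
```python
# Given an emotion category and an intensity level (1 = primary,
# 2 = intermediate, 3 = advanced), return the matching music set.

MUSIC_SETS = {
    (category, level): f'{name} {category} music set'
    for category in ('happy', 'surprised', 'weary', 'fearful', 'sad', 'angry')
    for level, name in ((1, 'primary'), (2, 'intermediate'), (3, 'advanced'))
}

def select_music_set(category: str, level: int) -> str:
    return MUSIC_SETS[(category, level)]

# "a little happy" -> happy category, intensity level 1
print(select_music_set('happy', 1))  # -> 'primary happy music set'
```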
In one embodiment, refer to the schematic flowchart of a multimedia data playback control method shown in Fig. 3. In this embodiment of the present invention, the playback control method can be performed by an executing terminal, which may be a smart terminal such as the smart speaker 102 in Fig. 1; in other embodiments, the executing terminal may also be another smart terminal, such as the smartphone 101 or the wearable smart terminal 103 shown in Fig. 1.
In S301, the user information of the user object is obtained. Depending on the collection devices actually connected or the hardware configuration, the user information may include one or more of text-type information, image-type information and user vital-sign information, where the text-type information may be, for example, the semantic/text information mentioned above, the image-type information may be, for example, the expression information and body-language information mentioned above, and the user vital-sign information can be detected by the wearable device 103, for example information such as the user object's heart rate, respiratory rate and skin temperature.
In one embodiment, obtaining the user information of the user object may include: obtaining text-type information about the user object, the text-type information including first text-type information collected from a first terminal with which a data connection has been established, and/or second text-type information converted from voice information collected from the first terminal. In one embodiment, the executing terminal can obtain the first text-type information from first terminals such as the smartphone and the wearable smart terminal shown in Fig. 1; in one embodiment, software applications set up on the first terminal, such as instant messaging applications and social applications, can collect the first text-type information and send it to the executing terminal. When the first terminal uploads voice information, the executing terminal can convert the collected voice information into the second text-type information through a Chinese Language Model (CLM) suited to converting Chinese speech, or through a similar model such as a word analysis model.
In one embodiment, obtaining the user information of the user object may also include: obtaining image-type information about the user object from a second terminal with which a data connection has been established. The second terminal and first terminals such as the smartphone or the wearable smart terminal may be the same terminal or different terminals. The image-type information is, for example, image information such as the body-language information and expression information of the user object. The image-type information may be captured by a camera provided on the executing terminal, or captured by a camera of the second terminal and sent to the executing terminal, or it may be information such as photos extracted by the executing terminal from the chat records of social software on the second terminal.
In one embodiment, obtaining the user information of the user object may also include: obtaining user vital-sign information about the user object from a third terminal with which a data connection has been established. The third terminal may be a wearable smart terminal capable of collecting information such as the user's heart rate, for example smart earphones or a smartwatch.
When the executing terminal obtains multiple items of user information from the user object simultaneously, it may first select target user information from the obtained items. Selecting the target user information from multiple items of user information may include: selecting target text-type information from multiple items of text-type information based on the amount of text content, selecting target image-type information from multiple items of image-type information based on image clarity, and selecting target vital-sign information from multiple items of user vital-sign information based on the degree of variation of the information; for example, the heart-rate information with the greater variation is taken as the target vital-sign information. The user information is then determined based on any one or more of the target text-type information, target image-type information and target vital-sign information. With the selected target text-type information, target image-type information and target vital-sign information as the final user information, the following S302 is performed (a sketch of this selection is given below).
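A minimal sketch of this target-information selection, assuming simple proxy metrics (text length for text items, a clarity score for image items, and variation of the signal for vital-sign items); all helper names and field names are illustrative.
```python
# Pick one target item per modality, per the selection criteria above.

def select_targets(texts: list[str],
                   images: list[dict],
                   sign_series: list[list[float]]) -> dict:
    target = {}
    if texts:
        target['text'] = max(texts, key=len)  # most text content
    if images:
        target['image'] = max(images, key=lambda i: i['clarity'])
    if sign_series:
        target['sign'] = max(sign_series, key=lambda s: max(s) - min(s))
    return target

print(select_targets(
    texts=['ok', 'I just passed my driving test!'],
    images=[{'id': 'a', 'clarity': 0.4}, {'id': 'b', 'clarity': 0.9}],
    sign_series=[[72, 74, 73], [70, 95, 88]],  # heart-rate samples (bpm)
))
```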
In one embodiment, after the executing terminal obtains the user information of the user object, in S302 a sentiment classification model is called to classify the user information and determine the target emotion category of the user object. The sentiment classification model is preset in the executing terminal and is used to analyze the user information so as to determine the emotion category of the user object. In one embodiment, the executing terminal provides a flow of calling the sentiment classification model to classify the user information as shown in Fig. 4, specifically including step S401 and step S402.
In S401, the executing terminal determines, according to priority determination rules, the priority of the information included in the user information; the priority determination rules are preset by the executing terminal and stored in the executing terminal. Priorities can be assigned to different categories of user information based on how strongly the emotion-category decision depends on the obtained user information, or based on whether the user information contains valid features.
In one embodiment, the priority order determined by the preset priority rules may be: user vital-sign information such as heart-rate information is given the highest priority for determining the user object's emotion category, followed by text-type information such as voice/text information, and finally the various kinds of image-type information.
In one embodiment, the user information containing valid features means that the user information contains specified keywords, specified image features, specified vital-sign features or the like. When the user information includes text-type information, the user information containing valid features means that the text-type information contains keywords usable for determining the target emotion category of the user object, for example "happy", "cheerful", "great" and so on. When the user information includes image-type information, the user information containing valid features means that the image-type information contains image features usable for determining the target emotion category of the user object, such as the amplitude by which the corners of the mouth are raised or the arc of the eyes' curvature. In one embodiment, when the user information includes user vital-sign information, the user information containing valid features means that the user vital-sign information contains vital-sign features usable for determining the target emotion category of the user object, such as heart rate or skin temperature. If the text-type information, image-type information and/or user vital-sign information contains no valid features, its priority is lowest, and it may even be deleted directly and not used for classification analysis. If the text-type information, image-type information and/or user vital-sign information does contain valid features, the priority can be determined based on the number of valid features: the more valid features, the higher the priority, and the user information to be input into the sentiment classification model for subsequent classification analysis is selected based on priority. For example, if some collected user information includes text-type information containing 3 specified keywords and image-type information containing only 2 specified image features, classification analysis is performed preferentially on the text-type information. If the target emotion category of the user object cannot be accurately determined from the text-type information, the image-type information can be selected instead and input into the sentiment classification model to classify the user object (see the sketch below).
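A minimal sketch of this validity-feature priority rule: count specified features per modality, discard items with none, and rank the rest by feature count. The keyword list and function names are placeholders, not the patent's actual configuration.
```python
EMOTION_KEYWORDS = {'happy', 'cheerful', 'great', 'sad', 'angry'}

def count_keywords(text: str) -> int:
    """Count specified emotion keywords appearing in the text."""
    return sum(1 for w in text.lower().split()
               if w.strip('!.,') in EMOTION_KEYWORDS)

def rank_by_validity(items: dict[str, int]) -> list[str]:
    """items maps modality name -> number of valid features found."""
    valid = {k: v for k, v in items.items() if v > 0}  # no features -> dropped
    return sorted(valid, key=valid.get, reverse=True)

counts = {'text': count_keywords('I feel so happy and cheerful, great day!'),
          'image': 2,   # e.g. raised mouth corners, curved eyes
          'sign': 0}    # no valid vital-sign feature
print(rank_by_validity(counts))  # -> ['text', 'image']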
When the user information received by the executing terminal includes two or more items of information, the priorities of the two or more items can first be determined, and the sentiment classification model is called to process the item with the highest priority among them; this reduces the processing load of the executing terminal while also ensuring the accuracy of the determined target emotion category of the user object.
In one embodiment, after determining the priority of the information included in the user information, the executing terminal, in S402, calls the sentiment classification model to classify the user information according to the determined priority. The sentiment classification model can be trained by machine learning methods such as support vector machines (Support Vector Machine, SVM) or maximum entropy. The method of building the sentiment classification model models textual concepts by weighting ontologies, semantic networks and emotion expressions in text information, so that more subtle expressions of user emotion in the user information can be recognized. In one embodiment, the sentiment classification model can classify the user information selected by priority by computing emotion-word scores.
Refer again to Fig. 5, a schematic flowchart of calling the sentiment classification model to classify text-type information according to an embodiment of the present invention. When the executing terminal determines according to priority that the information included in the user information to be processed by the sentiment classification model is text-type information, the executing terminal, in S501, performs word segmentation on the text-type information included in the user information to obtain an emotion word set. In one embodiment, the executing terminal can segment the text-type information by extracting keywords, the keywords being emotion words related to the user object's emotion category, such as "accident", "wedding", "earthquake" and so on; the set formed by the keywords extracted from the text-type information is the emotion word set. In one embodiment, the executing terminal can also use a bag-of-words model (Bag of Words Model) or a word segmentation tool to segment the text-type information and obtain the emotion word set.
In one embodiment, after the emotion word set is obtained, the executing terminal, in S502, calls the sentiment classification model to compute over the emotion words in the emotion word set, determine the emotion score of the user information, and determine the target emotion category of the user object according to the emotion score. The computation the sentiment classification model performs on the emotion words in the emotion word set is based on an emotion-word score database preset in the executing terminal, which stores multiple emotion words and the emotion score value corresponding to each of the multiple emotion words. The executing terminal can therefore calculate and determine the emotion score of the user information according to the emotion score values set for the emotion words in the emotion word set, and further determine the target emotion category of the user object according to the emotion score. In one embodiment, after the executing terminal determines the emotion score of the user information from the calculation result, it can match the emotion score against the scores (or score intervals) corresponding to the different emotion categories preset in the executing terminal, and determine the successfully matched emotion category as the target emotion category of the user object.
In one embodiment, the emotion score values corresponding to the emotion words in the emotion-word score database preset in the executing terminal may for example be: accident: negativity 0.7 (-0.7); wedding: positivity 0.9 (+0.9); earthquake: negativity 0.8 (-0.8). If the text information about user object A obtained by the executing terminal is "the first wedding in the earthquake area was held after the quake", the emotion word set obtained after word segmentation of the text information is (earthquake, wedding). Since the emotion score values preset in the executing terminal for "earthquake" and "wedding" are -0.8 and +0.9 respectively, the executing terminal calls the sentiment classification model to compute over the emotion words and obtains an emotion score of +0.1 for the text information. Assuming the score interval preset in the executing terminal for the happiness emotion category is +0.1 to +0.3 and the score interval for sadness is -0.1 to -0.3, and so on, it can be seen that the emotion score of the text information matches the score interval corresponding to happiness, so the target emotion category of user object A determined from the text information is happiness (a sketch follows).
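The following sketch reproduces this worked example end to end: match known emotion words, sum their preset score values, and match the total against per-category score intervals. The score values and intervals are those from the example above; the word matching is deliberately simplistic.
```python
SCORES = {'accident': -0.7, 'wedding': +0.9, 'earthquake': -0.8}
INTERVALS = {'happiness': (0.1, 0.3), 'sadness': (-0.3, -0.1)}

def classify(text: str) -> tuple[float, str | None]:
    emotion_words = [w for w in SCORES if w in text.lower()]
    score = round(sum(SCORES[w] for w in emotion_words), 2)
    for category, (lo, hi) in INTERVALS.items():
        if lo <= score <= hi:
            return score, category
    return score, None  # no interval matched

print(classify('The first wedding in the earthquake area was held after the quake'))
# -> (0.1, 'happiness')
```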
When the executing terminal determines according to priority that the information included in the user information to be processed by the sentiment classification model is image-type information, refer to the schematic flowchart of calling the sentiment classification model to classify image-type information shown in Fig. 6. In S601, the executing terminal performs face-region detection and key-point localization on the image-type information to obtain a face-image shape. After obtaining the face-image shape, in S602 the executing terminal shape-aligns the face-image shape with a target reference category shape to obtain a rigid-body normalized shape of the face image, where the target reference category shape is recorded in a shape database and corresponds to an image emotion category, the shape database containing multiple reference category shapes, each of which corresponds to one image emotion category. After the executing terminal obtains the rigid-body normalized shape of the face image, in S603 the sentiment classification model is called to perform expression recognition on the rigid-body normalized shape of the face image and obtain an expression category prediction result; in one embodiment, the expression category prediction result can be multiple possible image emotion categories, from which the final target emotion category of the user object is subsequently determined. In S604, the executing terminal can then, according to the prediction result, perform weighted voting on the expression category prediction result to determine the target emotion category of the user object (see the weighted-voting sketch below).
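A minimal sketch of the weighted voting in S604, assuming the expression recognizer has already produced several candidate image emotion categories, each with a confidence weight; the names and weights are illustrative, and the detection, alignment and recognition steps (S601 to S603) are not shown.
```python
from collections import defaultdict

def weighted_vote(predictions: list[tuple[str, float]]) -> str:
    """predictions: (candidate emotion category, confidence weight) pairs."""
    totals: defaultdict[str, float] = defaultdict(float)
    for category, weight in predictions:
        totals[category] += weight
    return max(totals, key=totals.get)

# e.g. three candidate predictions for one rigid-body normalized face shape
print(weighted_vote([('happiness', 0.5), ('surprise', 0.3), ('happiness', 0.4)]))
# -> 'happiness'
```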
When the executing terminal determines according to priority that the information included in the user information to be processed by the sentiment classification model is user vital-sign information, and the user vital-sign information is heart-rate information of the user object, the executing terminal can determine the target emotion category of the user object by reference to methods for real-time extraction and classification based on the heart-rate values of heart-rate pulse signals.
In one embodiment, after determining the target emotion category of the user object, the executing terminal, in S303, identifies from the user information, according to the level-feature reference information associated with the target emotion category, the feature indication information used to express the degree of emotion, and in S304 determines, from the feature indication information, the user object's target intensity level under the target emotion category. At least two intensity levels are preset in the executing terminal under the target emotion category, and the level-feature reference information associated with the target emotion category refers to information pre-stored in the executing terminal that describes the different intensity levels under the target emotion category. The level-feature reference information mainly includes keywords describing the different intensity levels under the target emotion category, for example the degree adverbs of the different intensity levels under the target emotion category, such as: "a little", "slightly", "very", "extremely" and so on. The feature indication information refers to information that the executing terminal extracts (or identifies) from the user information and that is used to determine the target intensity level under the user object's target emotion category.
In one embodiment, after determining the target emotion category of the user object, the executing terminal can identify the feature indication information from the user information, and determine from the level-feature reference information the target level-feature reference information that is consistent with the feature indication information or whose degree of match with it meets a preset threshold; the intensity level of the target emotion category indicated by that target level-feature reference information is then the user object's target intensity level under the target emotion category.
In one embodiment, taking the refinement of the emotion categories shown in Table 1 into three intensity levels as an example, the level-feature reference information pre-stored in the executing terminal for describing primary intensity may include: "a little", "slightly", "somewhat" and so on; the level-feature reference information for describing intermediate intensity may include: "very", "especially", "quite" and so on; and the level-feature reference information for describing the advanced intensity level may include: "extremely", "exceedingly" and so on (see the sketch below).
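A minimal sketch of matching the feature indication information against this level-feature reference information: scan the text for the degree adverbs pre-stored for each intensity level. The adverb lists are the examples from the text, not an exhaustive configuration, and the matching is a simple substring check.
```python
LEVEL_FEATURES = {
    1: ['a little', 'slightly', 'somewhat'],  # primary intensity
    2: ['very', 'especially', 'quite'],       # intermediate intensity
    3: ['extremely', 'exceedingly'],          # advanced intensity
}

def target_intensity(text: str, default: int = 1) -> int:
    """Return the highest intensity level whose reference adverbs appear."""
    text = text.lower()
    matched = [level for level, adverbs in LEVEL_FEATURES.items()
               if any(adverb in text for adverb in adverbs)]
    return max(matched, default=default)

print(target_intensity('I am so very happy today'))  # -> 2 (intermediate)
print(target_intensity('feeling a little down'))     # -> 1 (primary)
```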
In one embodiment, after determining the user object's target intensity level under the target emotion category, the executing terminal, in S305, obtains multimedia data associated with the target intensity level from the multimedia database and plays the multimedia data. The multimedia database can be stored in advance in the executing terminal, or stored in advance in a cloud server or other smart terminal connected to the executing terminal. The multimedia data associated with the target intensity level includes: multimedia data stored in the multimedia database that has been given level labels associated with the target intensity level. The level labels are added by the executing terminal or the server when the multimedia data is stored; the executing terminal or server can label the multimedia data according to its characteristic information, where the characteristic information of the multimedia data is, for example, the melody of audio-form multimedia data and the keywords of text-form multimedia data.
In one embodiment, the multimedia data associated with the target intensity level further includes: multimedia data in multimedia sets stored in the multimedia database that are associated with the target intensity level. The multimedia sets are formed from the user's history of playing multimedia data, and the multimedia data belonging to the same multimedia set corresponds to the same category and intensity level. In one embodiment, playing multimedia data associated with the target intensity level can reinforce positive emotions or relieve negative emotions.
In one embodiment, the method by which the executing terminal obtains and plays multimedia data associated with the target intensity level from the multimedia database specifically includes: determining from the multimedia database the multimedia data associated with the target intensity level, and selecting one or more items of multimedia data from the determined multimedia data to play, where the multimedia data includes audio-form multimedia data, text-form multimedia data and visual-form multimedia data. When the user information obtained by the executing terminal is "Today I passed my driving test, so great! Play me a cheerful song!", the sentiment classification model is called to segment the text information, giving the emotion word set (passed, great); after the emotion words in the emotion word set are computed, the target emotion category of the user object determined from the resulting emotion-word score is happiness. The feature indication information expressing the degree of emotion in the text information is then determined to include "so great" and "cheerful", from which it can be determined that the user's emotion is "very happy" (the intensity level under the happiness emotion is intermediate); the music in the "very happy" music set in the multimedia database is then played, or music in the multimedia database to which the "very happy" label has been added is played.
In a further embodiment, after identifying from the user information the feature indication information expressing the degree of emotion, the executing terminal can also determine the emotional-state level of the user object according to the identified feature indication information, and select and obtain promotion object information according to that emotional-state level, so that the selected promotion object information can be played, where the promotion object information includes: information about a target product to be promoted, and/or information about a target service to be promoted.
In one embodiment, after the sentiment classification model is called to determine the target emotion category of the user object, products (object information) can be promoted based on psychological-distance theory combined with the feature indication information. Psychological-distance theory treats the case where the user object's emotion category belongs to positive emotion as a "high-level" construal field, and the case where the user object's emotion category belongs to negative emotion as a "low-level" construal field. Therefore, when the emotional-state level of the user object is in the "high-level" construal field, according to psychological-distance theory the user object prefers abstract, distant, long-horizon products; the products promoted by the terminal at this time may for example be a home-decoration style rather than a specific set of decoration materials, or a European tour package rather than a local day-trip package. When the emotional-state level of the user object is in the "low-level" construal field, according to psychological-distance theory the user object prefers concrete, nearby, short-horizon products, so the products promoted by the terminal at this time may be a specific set of decoration materials, a local day-trip package and so on (a sketch follows).
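A minimal sketch of this construal-level promotion rule, assuming the positive/negative polarity of the target emotion category is already known; the catalog entries are purely illustrative placeholders.
```python
PROMOTIONS = {
    'high': ['home-decoration style guide', 'European tour package'],       # abstract, distant
    'low':  ['specific decoration material set', 'local day-trip package'],  # concrete, nearby
}

def pick_promotion(emotion_polarity: str) -> list[str]:
    """'positive' -> high-level construal field, 'negative' -> low-level."""
    field = 'high' if emotion_polarity == 'positive' else 'low'
    return PROMOTIONS[field]

print(pick_promotion('positive'))  # -> abstract / distant products
print(pick_promotion('negative'))  # -> concrete / nearby products
```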
In embodiments of the present invention, the executing terminal calls a sentiment classification model to process the user information of the user object, so as to determine the target emotion category of the user object and the target intensity level under the target emotion category, and can then play the multimedia data in the multimedia database associated with the target intensity level, achieving finer and more accurate control of the playback of multimedia data.
Based on the description of the above method embodiments, in one embodiment the embodiments of the present invention further provide a multimedia data playback control apparatus whose schematic structural block diagram is shown in Fig. 7. As shown in Fig. 7, the multimedia data playback control apparatus in the embodiment of the present invention includes: an obtaining unit 701, a determining unit 702, a recognition unit 703 and a playback unit 704. In the embodiment of the present invention, the multimedia data playback control apparatus can be provided in a device that needs to control the playback of multimedia data.
In one embodiment, the obtaining unit 701 is configured to obtain the user information of a user object; the determining unit 702 is configured to call a sentiment classification model to classify the user information and determine the target emotion category of the user object; the recognition unit 703 is configured to identify, from the user information and according to level-feature reference information associated with the target emotion category, feature indication information expressing the degree of emotion; the determining unit 702 is further configured to determine, from the feature indication information, the user object's target intensity level under the target emotion category; and the playback unit 704 is configured to obtain multimedia data associated with the target intensity level from the multimedia database and play the obtained multimedia data.
In one embodiment, at least two intensity levels are provided under the target emotion category. When obtaining multimedia data associated with the target intensity level from the multimedia database and playing the obtained multimedia data, the playback unit 704 is configured to determine from the multimedia database the multimedia data associated with the target intensity level, select one or more items of multimedia data from the determined multimedia data, and play the selected multimedia data. In one embodiment, the multimedia data associated with the target intensity level refers to: multimedia data stored in the multimedia database that has been given level labels associated with the target intensity level, or multimedia data in multimedia sets stored in the multimedia database that are associated with the target intensity level.
In one embodiment, when obtaining the user information of a user object, the obtaining unit 701 may specifically be configured to perform any one or more of the following steps: obtaining text-type information about the user object, the text-type information including first text-type information collected from a first terminal with which a data connection has been established, and/or second text-type information converted from voice information collected from the first terminal; obtaining image-type information about the user object from a second terminal with which a data connection has been established; obtaining user vital-sign information about the user object from a third terminal with which a data connection has been established.
In one embodiment, when calling the sentiment classification model to classify the user information, the determining unit 702 is configured to determine, according to priority determination rules, the priority of the information included in the user information, the information included in the user information referring to any one or more of text-type information, image-type information and user vital-sign information, and to call the sentiment classification model to classify the user information according to the determined priority. In one embodiment, the priority determination rules include: a rule that determines priority according to the valid features of the information included in the user information, and a rule that determines priority according to priorities set for the different categories of information in the user information.
In one embodiment, the information included in the user information includes: text-type information. When calling the sentiment classification model to classify the user information and determine the target emotion category of the user object, the determining unit 702 is configured to perform word segmentation on the text-type information included in the user information to obtain an emotion word set; to call the sentiment classification model to compute over the emotion words in the emotion word set and determine the emotion score of the user information; and to determine the target emotion category of the user object according to the emotion score. In one embodiment, an emotion-word score database is preset, in which multiple emotion words and the emotion score value of each emotion word are recorded, and the sentiment classification model computes over the emotion words in the emotion word set based on the emotion-word score database.
In one embodiment, the information included in the user information includes: image-type information. When calling the sentiment classification model to classify the user information and determine the target emotion category of the user object, the determining unit 702 is configured to perform face-region detection and key-point localization on the image-type information to obtain a face-image shape; to shape-align the face-image shape with a target reference category shape to obtain a rigid-body normalized shape of the face image, where in one embodiment the target reference category shape is recorded in a shape database and corresponds to an image emotion category, the shape database containing multiple reference category shapes, each of which corresponds to one image emotion category; to call the sentiment classification model to perform expression recognition on the rigid-body normalized shape of the face image and obtain an expression category prediction result; and to perform weighted voting on the expression category prediction result to determine the target emotion category of the user object.
In one embodiment, the determining unit 702 is further configured to determine the emotional-state level of the user object according to the identified feature indication information; the playback unit 704 is further configured to select and obtain promotion object information according to the emotional-state level, and to play the selected promotion object information; the promotion object information includes: information about a target product to be promoted, and/or information about a target service to be promoted.
In embodiments of the present invention, the determining unit 702 processes the user information of the user object by calling a sentiment classification model, so as to determine the target emotion category of the user object and the target intensity level under the target emotion category, and the playback unit 704 can then play the multimedia data in the multimedia database associated with the target intensity level, achieving finer and more accurate control of the playback of multimedia data.
Refer to Fig. 8, a schematic structural block diagram of a terminal provided by an embodiment of the present invention. As shown in Fig. 8, the terminal in this embodiment may include: one or more processors 801, one or more input devices 802, one or more output devices 803 and a memory 804. The processor 801, input device 802, output device 803 and memory 804 are connected through a bus 805. The memory 804 is configured to store a computer program comprising program instructions, and the processor 801 is configured to execute the program instructions stored in the memory 804.
The memory 804 may include volatile memory such as random-access memory (RAM); it may also include non-volatile memory such as flash memory or a solid-state drive (SSD); it may also include a combination of the above types of memory.
The processor 801 may be a central processing unit (CPU). The processor 801 may further include a hardware chip, such as an application-specific integrated circuit (ASIC) or a programmable logic device (PLD); the PLD may be a field-programmable gate array (FPGA), generic array logic (GAL), etc. The processor 801 may also be a combination of the above structures.
In the embodiments of the present invention, the memory 804 stores a computer program comprising program instructions, and the processor 801 executes the program instructions stored in the memory 804 to implement the steps of the related methods in the above embodiments.
In one embodiment, the processor 801 is configured to call the program instructions to: obtain user information of a user object; call a sentiment classification model to classify the user information and determine the target emotion category to which the user object belongs; identify, from the user information and according to level feature reference information associated with the target emotion category, feature indication information that indicates the degree of emotion, and determine from that feature indication information the target intensity grade of the user object under the target emotion category; and obtain from a multimedia database the multimedia data associated with the target intensity grade and play the obtained multimedia data.
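A self-contained toy sketch of these four steps follows, under the assumption that a keyword test can stand in for the sentiment classification model and a dict for the multimedia database; all names and rules below are illustrative only.

```python
# Self-contained toy sketch of the four steps; all logic is a stand-in
# for the patent's models and database, not an actual implementation.
MEDIA_DB = {
    ("negative", 1): ["calm_song.mp3"],
    ("negative", 2): ["upbeat_song.mp3", "comedy_clip.mp4"],
}

def classify_emotion(user_info: dict) -> str:
    # Toy classifier standing in for the sentiment classification model.
    return "negative" if "sad" in user_info.get("text", "") else "positive"

def intensity_grade(user_info: dict, category: str) -> int:
    # Toy rule standing in for the level feature reference information:
    # an exclamation mark is read as a stronger degree of emotion.
    return 2 if "!" in user_info.get("text", "") else 1

def play_for_user(user_info: dict) -> None:
    category = classify_emotion(user_info)
    grade = intensity_grade(user_info, category)
    for item in MEDIA_DB.get((category, grade), []):
        print(f"playing {item}")  # stand-in for actual media playback

play_for_user({"text": "feeling so sad today!"})  # plays the grade-2 items
```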
In one embodiment, at least two intensity grades are provided under the target emotion category. When obtaining from the multimedia database the multimedia data associated with the target intensity grade and playing the obtained multimedia data, the processor 801 determines the multimedia data associated with the target intensity grade from the multimedia database, selects one or more items from the determined multimedia data, and plays the selected multimedia data. Here, multimedia data associated with the target intensity grade refers either to multimedia data stored in the multimedia database that carries a grade label associated with the target intensity grade, or to multimedia data stored in a multimedia collection of the database that is associated with the target intensity grade.
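The two storage conventions described here (a per-item grade label versus grade-keyed multimedia collections) might be sketched as follows; the data layout and the random selection policy are assumptions, since the patent leaves the selection strategy open.

```python
import random

# Two assumed storage conventions for grade-associated media.
MEDIA_BY_LABEL = [            # each item carries a grade label
    {"name": "a.mp3", "grade": 1},
    {"name": "b.mp3", "grade": 2},
    {"name": "c.mp4", "grade": 2},
]
MEDIA_COLLECTIONS = {1: ["a.mp3"], 2: ["b.mp3", "c.mp4"]}  # grade -> collection

def candidates(grade: int, by_label: bool = True) -> list[str]:
    # Determine the media associated with the target intensity grade,
    # under either storage convention.
    if by_label:
        return [m["name"] for m in MEDIA_BY_LABEL if m["grade"] == grade]
    return MEDIA_COLLECTIONS.get(grade, [])

def pick(grade: int, k: int = 1) -> list[str]:
    # Select one or more items from the determined media; random choice
    # is an assumption, not a policy stated in the patent.
    pool = candidates(grade)
    return random.sample(pool, min(k, len(pool)))

print(pick(2, k=2))  # e.g. ['c.mp4', 'b.mp3']
```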
In one embodiment, when obtaining the user information of a user object, the processor 801 performs any one or more of the following steps: obtaining text-category information about the user object, where the text-category information includes first text-category information collected from a first terminal with which a data connection has been established, and/or second text-category information converted from voice information collected from that first terminal; obtaining image-category information about the user object from a second terminal with which a data connection has been established; and obtaining user sign-category information about the user object from a third terminal with which a data connection has been established.
In one embodiment, when calling the sentiment classification model to classify the user information, the processor 801 determines the priority of the information included in the user information according to a priority-determination rule, where the included information refers to any one or more of text-category information, image-category information, and user sign-category information; it then calls the sentiment classification model to classify the user information according to the determined priority. The priority-determination rule includes a rule that determines priority according to the validity features of the information included in the user information, or a rule that determines priority according to priorities preset for the different categories of information in the user information.
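Both priority-determination rules could look roughly like the following sketch, where the preset per-category priorities and the "amount of usable content" validity heuristic are assumptions rather than rules specified by the patent.

```python
# Illustrative sketch of the two priority-determination rules; the preset
# priorities and the length-based validity heuristic are assumptions.
PRESET_PRIORITY = {"text": 0, "image": 1, "signs": 2}  # lower = analyzed first

def preset_priority(user_info: dict) -> list[str]:
    # Rule: priorities preset per information category.
    present = [k for k, v in user_info.items() if v]
    return sorted(present, key=lambda k: PRESET_PRIORITY.get(k, 99))

def validity_priority(user_info: dict) -> list[str]:
    # Rule (assumed heuristic): rank modalities by how much usable
    # content they carry, here approximated by the data's length.
    present = {k: len(v) for k, v in user_info.items() if v}
    return sorted(present, key=present.get, reverse=True)

info = {"text": "feeling great today", "image": "", "signs": "hr=72"}
print(preset_priority(info))    # ['text', 'signs']
print(validity_priority(info))  # ['text', 'signs']
```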
In one embodiment, the user information includes text-category information. When calling the sentiment classification model to classify the user information and determine the target emotion category to which the user object belongs, the processor 801 performs word segmentation on the text-category information included in the user information to obtain an emotion word set, calls the sentiment classification model to compute the emotion words in the emotion word set, determines a sentiment score for the user information, and determines the target emotion category of the user object according to the sentiment score. Here, an emotion-word score database is provided in advance, recording multiple emotion words and the emotion score of each; the sentiment classification model computes the emotion words in the emotion word set based on this database.
In one embodiment, the user information includes image-category information. When calling the sentiment classification model to classify the user information and determine the target emotion category to which the user object belongs, the processor 801 performs face region detection and key point localization on the image-category information to obtain a facial image shape, and aligns the facial image shape with a target reference category shape to obtain a rigid-body normalized shape of the facial image, where the target reference category shape is recorded in a shape database and corresponds to an image emotion category, the shape database containing multiple reference category shapes, each of which corresponds to one image emotion category. The processor then calls the sentiment classification model to perform expression recognition on the rigid-body normalized shape of the facial image, obtaining an expression class prediction result, and performs weighted voting on the expression class prediction result to determine the target emotion category to which the user object belongs.
In one embodiment, the processor 801 is further configured to determine the affective state grade of the user object according to the identified feature indication information, to select and obtain promotion object information according to the affective state grade, and to play the selected promotion object information. The promotion object information includes information on a target product to be promoted and/or information on a target service to be promoted.
One of ordinary skill in the art will appreciate that all or part of the flows in the methods of the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the embodiments of the above methods. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random-access memory (RAM), etc.
The above disclosure is only a part of the embodiments of the present invention and certainly cannot be taken to limit the scope of the claims of the present invention; therefore, equivalent changes made according to the claims of the present invention still fall within the scope of the present invention.

Claims (10)

1. A playback control method for multimedia data, characterized by comprising:
obtaining user information of a user object;
calling a sentiment classification model to classify the user information, and determining a target emotion category to which the user object belongs;
identifying, from the user information and according to level feature reference information associated with the target emotion category, feature indication information that indicates a degree of emotion, and determining, according to the feature indication information, a target intensity grade of the user object under the target emotion category;
obtaining multimedia data associated with the target intensity grade from a multimedia database, and playing the obtained multimedia data.
2. The method according to claim 1, characterized in that at least two intensity grades are provided under the target emotion category, and that obtaining multimedia data associated with the target intensity grade from the multimedia database and playing the obtained multimedia data comprises:
determining multimedia data associated with the target intensity grade from the multimedia database;
selecting one or more items from the determined multimedia data, and playing the selected multimedia data;
wherein multimedia data associated with the target intensity grade refers to multimedia data stored in the multimedia database that carries a grade label associated with the target intensity grade, or to multimedia data stored in a multimedia collection of the multimedia database that is associated with the target intensity grade.
3. The method according to claim 1, characterized in that obtaining the user information of the user object comprises any one or more of the following steps:
obtaining text-category information about the user object, the text-category information comprising first text-category information collected from a first terminal with which a data connection has been established, and/or second text-category information converted from voice information collected from that first terminal;
obtaining image-category information about the user object from a second terminal with which a data connection has been established;
obtaining user sign-category information about the user object from a third terminal with which a data connection has been established.
4. The method according to claim 1, characterized in that calling the sentiment classification model to classify the user information comprises:
determining, according to a priority-determination rule, the priority of the information included in the user information, the included information referring to any one or more of text-category information, image-category information, and user sign-category information;
calling the sentiment classification model to classify the user information according to the determined priority;
wherein the priority-determination rule comprises a rule that determines priority according to the validity features of the information included in the user information, or a rule that determines priority according to priorities preset for the different categories of information in the user information.
5. The method according to claim 1, characterized in that the user information includes text-category information, and that calling the sentiment classification model to classify the user information and determining the target emotion category to which the user object belongs comprises:
performing word segmentation on the text-category information included in the user information to obtain an emotion word set;
calling the sentiment classification model to compute the emotion words in the emotion word set, determining a sentiment score for the user information, and determining the target emotion category of the user object according to the sentiment score;
wherein an emotion-word score database is provided in advance, the database recording multiple emotion words and the emotion score of each emotion word, and the sentiment classification model computes the emotion words in the emotion word set based on the emotion-word score database.
6. The method according to claim 1, characterized in that the user information includes image-category information, and that calling the sentiment classification model to classify the user information and determining the target emotion category to which the user object belongs comprises:
performing face region detection and key point localization on the image-category information to obtain a facial image shape;
aligning the facial image shape with a target reference category shape to obtain a rigid-body normalized shape of the facial image, wherein the target reference category shape is recorded in a shape database and corresponds to an image emotion category, the shape database comprising multiple reference category shapes, each of which corresponds to one image emotion category;
calling the sentiment classification model to perform expression recognition on the rigid-body normalized shape of the facial image, obtaining an expression class prediction result;
performing weighted voting on the expression class prediction result to determine the target emotion category to which the user object belongs.
7. The method according to any one of claims 1 to 6, characterized in that, after identifying from the user information the feature indication information that indicates a degree of emotion, the method further comprises:
determining the affective state grade of the user object according to the identified feature indication information;
selecting and obtaining promotion object information according to the affective state grade, and playing the selected promotion object information;
wherein the promotion object information comprises information on a target product to be promoted, and/or information on a target service to be promoted.
8. A playback control device for multimedia data, characterized by comprising:
an acquisition unit for obtaining user information of a user object;
a determination unit for calling a sentiment classification model to classify the user information and determining a target emotion category to which the user object belongs;
a recognition unit for identifying, from the user information and according to level feature reference information associated with the target emotion category, feature indication information that indicates a degree of emotion;
the determination unit being further configured to determine, according to the feature indication information, a target intensity grade of the user object under the target emotion category;
a broadcast unit for obtaining multimedia data associated with the target intensity grade from a multimedia database, and playing the obtained multimedia data.
9. A terminal, characterized by comprising a processor, an input device, an output device, and a memory, the processor, input device, output device, and memory being connected to one another, wherein the memory stores a computer program comprising program instructions, and the processor is configured to call the program instructions to execute the method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to execute the method according to any one of claims 1 to 7.
CN201810591842.7A 2018-06-07 2018-06-07 A kind of control method for playing back of multi-medium data, device and terminal Pending CN108810625A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810591842.7A CN108810625A (en) 2018-06-07 2018-06-07 A kind of control method for playing back of multi-medium data, device and terminal

Publications (1)

Publication Number Publication Date
CN108810625A true CN108810625A (en) 2018-11-13

Family

ID=64088149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810591842.7A Pending CN108810625A (en) 2018-06-07 2018-06-07 A kind of control method for playing back of multi-medium data, device and terminal

Country Status (1)

Country Link
CN (1) CN108810625A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104598644A (en) * 2015-02-12 2015-05-06 腾讯科技(深圳)有限公司 User fond label mining method and device
CN104836720A (en) * 2014-02-12 2015-08-12 北京三星通信技术研究有限公司 Method for performing information recommendation in interactive communication, and device
CN105574478A (en) * 2015-05-28 2016-05-11 宇龙计算机通信科技(深圳)有限公司 Information processing method and apparatus
CN106528859A (en) * 2016-11-30 2017-03-22 英华达(南京)科技有限公司 Data pushing system and method
CN106878809A (en) * 2017-02-15 2017-06-20 腾讯科技(深圳)有限公司 A kind of video collection method, player method, device, terminal and system
US20180049688A1 (en) * 2013-08-12 2018-02-22 The Nielsen Company (Us), Llc Methods and apparatus to identify a mood of media

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339348A (en) * 2018-12-19 2020-06-26 北京京东尚科信息技术有限公司 Information service method, device and system
CN110225398A (en) * 2019-05-28 2019-09-10 腾讯科技(深圳)有限公司 Multimedia object playback method, device and equipment and computer storage medium
CN110225398B (en) * 2019-05-28 2022-08-02 腾讯科技(深圳)有限公司 Multimedia object playing method, device and equipment and computer storage medium
CN111353052A (en) * 2020-02-17 2020-06-30 北京达佳互联信息技术有限公司 Multimedia object recommendation method and device, electronic equipment and storage medium
CN111353052B (en) * 2020-02-17 2023-11-21 北京达佳互联信息技术有限公司 Multimedia object recommendation method and device, electronic equipment and storage medium
CN111639208A (en) * 2020-04-30 2020-09-08 维沃移动通信有限公司 Animation display method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181113