CN111258435A - Multimedia resource commenting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111258435A
CN111258435A (application CN202010044367.9A)
Authority
CN
China
Prior art keywords
expression
target
data
resource
multimedia resource
Prior art date
Legal status
Pending
Application number
CN202010044367.9A
Other languages
Chinese (zh)
Inventor
艾书明
刘付家
Current Assignee
Reach Best Technology Co Ltd
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Reach Best Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Reach Best Technology Co Ltd
Priority application: CN202010044367.9A (published as CN111258435A)
Related applications: US17/123,507 (US11394675B2); EP21151561.4A (EP3852044A1)
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/02 Input arrangements using manually operated switches, e.g. using keyboards or dials
    • G06F3/023 Arrangements for converting discrete items of information into a coded form, e.g. arrangements for interpreting keyboard generated codes as alphanumeric codes, operand codes or instruction codes
    • G06F3/0233 Character input methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/01 Social networking
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/07 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, characterised by the inclusion of specific contents
    • H04L51/10 Multimedia information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/54 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/55 Clustering; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/60 Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F16/68 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/686 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using information manually generated, e.g. tags, keywords, comments, title or artist information, time, location or usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 Rating or review of business operators or products
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L51/00 User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail
    • H04L51/04 Real-time or near real-time messaging, e.g. instant messaging [IM]
    • H04L51/046 Interoperability with other network applications or services

Abstract

The application provides a multimedia resource commenting method and device, an electronic device, and a storage medium. The method includes: in response to a comment triggering operation performed by a user account on a target multimedia resource, acquiring target expression package data corresponding to the target multimedia resource; and displaying the target expression package data so that the user account can select target expression data from it to comment on the target multimedia resource. Because the expression package data displayed in response to the comment triggering operation corresponds to the target multimedia resource, the user can select expression data and comment without tedious browsing or searching; expression data better suited to commenting on the multimedia resource can be provided quickly, and the user's commenting experience is improved.

Description

Multimedia resource commenting method and device, electronic equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of communication, in particular to a method and a device for commenting multimedia resources, electronic equipment and a storage medium.
Background
In Internet social networking, expression packages (meme stickers) are the most direct and lively form of exchange between social contacts, and have become an information carrier on a par with text, pictures, voice, and video.
In the related art, when viewing a multimedia resource, a user may comment on its content. Specifically, the user can comment on the content of the multimedia resource with text or simple emoticons in a comment window of the multimedia resource.
However, because the current scheme only allows commenting on the multimedia resource with text or emoticons, the content in the comment window is monotonous and users have little enthusiasm for interacting with the multimedia resource.
Disclosure of Invention
The embodiments of the application provide a method and device for commenting on multimedia resources, an electronic device, and a storage medium, aiming to solve the problem in the related art that users have little enthusiasm for interacting with multimedia resources because comments can only be made with text or emoticons.
In a first aspect, an embodiment of the present application provides a method for commenting a multimedia resource, where the method includes:
in response to a comment triggering operation performed by a user account on a target multimedia resource, acquiring target expression package data corresponding to the target multimedia resource;
and displaying the target expression package data, allowing the user account to select target expression data from the target expression package data to comment on the target multimedia resource.
Optionally, the step of obtaining the target expression package data corresponding to the target multimedia resource includes:
acquiring the resource type of the target multimedia resource;
and selecting corresponding target expression package data according to the resource type of the target multimedia resource.
Optionally, the step of selecting the corresponding target expression package data according to the resource type of the target multimedia resource includes:
determining a target expression package category corresponding to the resource type of the target multimedia resource from a preset correspondence list, wherein the correspondence list includes correspondences between resource types and expression package categories;
and selecting target expression package data corresponding to the target expression package category from a preset expression package database, wherein each piece of expression package data in the expression package database has a corresponding expression package category.
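The two lookups above (resource type to expression-package category, then category to package data) reduce to plain dictionary lookups. The sketch below is only illustrative; the correspondence list, database contents, and function name are hypothetical, not taken from the patent.

```python
# Hypothetical correspondence list: resource type -> expression-package category.
CORRESPONDENCE_LIST = {
    "food": "gourmet",
    "travel": "scenery",
}

# Hypothetical expression-package database; each entry records its category.
EXPRESSION_PACKAGE_DB = [
    {"id": "pkg-1", "category": "gourmet"},
    {"id": "pkg-2", "category": "scenery"},
    {"id": "pkg-3", "category": "gourmet"},
]

def select_target_packages(resource_type):
    """Resource type -> package category -> all packages of that category."""
    category = CORRESPONDENCE_LIST.get(resource_type)
    if category is None:
        return []  # no mapping for this resource type
    return [p for p in EXPRESSION_PACKAGE_DB if p["category"] == category]
```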
Optionally, the step of selecting the target expression package data corresponding to the target expression package category from a preset expression package database includes:
selecting all expression package data corresponding to the target expression package category from the expression package database as candidate expression package data;
and selecting, from the candidate expression package data according to a first historical use record of the candidate expression package data, candidate expression package data whose use count is greater than or equal to a first preset threshold as the target expression package data, wherein the first historical use record includes the number of times the user account has used each piece of candidate expression package data.
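The first-history filter described above reduces to a threshold test over a per-account usage counter. A minimal sketch, assuming the history is simply a mapping from package id to this account's usage count:

```python
def filter_by_usage(candidates, first_history, threshold):
    """Keep candidate package ids whose usage count by this user account
    meets the first preset threshold; unseen packages count as 0 uses."""
    return [pkg for pkg in candidates if first_history.get(pkg, 0) >= threshold]
```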
Optionally, the method further includes:
acquiring current user portrait data of the user account and other user portrait data of other user accounts besides the current user, wherein the current user portrait data includes at least one of the age, gender, region, occupation tag, and interest tag of the user account, and the other user portrait data includes at least one of the age, gender, region, occupation tag, and interest tag of the other user accounts;
determining, as similar user accounts of the user account, the other user accounts whose user portrait data has a similarity to the current user portrait data greater than or equal to a preset similarity threshold;
and selecting, from the candidate expression package data according to a second historical use record of the candidate expression package data, candidate expression package data whose use count is greater than or equal to a second preset threshold, wherein the second historical use record includes the number of times the similar user accounts have used each piece of candidate expression package data.
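The similar-account step can be sketched as below. The field-matching similarity used here is a deliberately crude stand-in for whatever portrait similarity the patent contemplates, and all account data is made up.

```python
def portrait_similarity(a, b):
    """Fraction of shared portrait fields (age, gender, region, tags) with equal values."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[k] == b[k] for k in shared) / len(shared)

def find_similar_accounts(current_portrait, other_portraits, threshold):
    """Accounts whose portrait similarity to the current user meets the preset threshold."""
    return [account for account, portrait in other_portraits.items()
            if portrait_similarity(current_portrait, portrait) >= threshold]
```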
Optionally, the method further includes:
determining a hot resource type corresponding to each piece of expression package data according to a third historical use record of each piece of expression package data in the expression package database, wherein the hot resource type is the resource type of the multimedia resources on which the expression package data has been used for comments the most times, and the third historical use record includes the number of times the expression package data has been used for comments;
and when the resource type corresponding to the expression package data in the correspondence list is inconsistent with the hot resource type, replacing the resource type corresponding to the expression package data in the correspondence list with the hot resource type.
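The refresh step above can be sketched with a counter over a comment log. For illustration the correspondence list is viewed from the package side (package category to resource type); all names and data are hypothetical.

```python
from collections import Counter

def hot_resource_type(comment_log):
    """Resource type on whose resources the package was used for comments the
    most times; `comment_log` stands in for the third historical use record."""
    return Counter(comment_log).most_common(1)[0][0]

def refresh_correspondence(correspondence, package_category, comment_log):
    """Replace the mapped resource type with the hot one when they disagree."""
    hot = hot_resource_type(comment_log)
    if correspondence.get(package_category) != hot:
        correspondence[package_category] = hot
    return correspondence
```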
Optionally, the method further includes:
extracting an expression package tag from each piece of expression package data in the expression package database to obtain an expression package tag set;
and performing semantic clustering on the expression package tag set to determine expression package categories and the correspondences between different expression package tags and the expression package categories, thereby obtaining the expression package category of each piece of expression package data in the expression package database.
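As a toy stand-in for the semantic clustering step, tags can be grouped against hypothetical seed keyword sets; a real system would cluster on learned embeddings instead of literal membership.

```python
def cluster_tags(tag_set, seed_categories):
    """Assign each expression-package tag to the first category whose seed
    keywords contain it; unmatched tags fall into a catch-all "other"."""
    mapping = {}
    for tag in tag_set:
        for category, keywords in seed_categories.items():
            if tag in keywords:
                mapping[tag] = category
                break
        else:
            mapping[tag] = "other"
    return mapping
```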
Optionally, the step of obtaining the resource type of the target multimedia resource includes:
identifying each resource image frame included in the target multimedia resource, and determining the target objects presented in the target multimedia resource and the association relations between the target objects;
and determining the resource type of the multimedia resource according to object attributes of the target objects, the association relations between the target objects, and resource attribute information of the multimedia resource, wherein an object attribute includes at least an object type or an object name, and the resource attribute information includes at least one of a resource title or a resource description of the multimedia resource.
Optionally, the step of obtaining the resource type of the target multimedia resource includes:
extracting resource attribute information of the target multimedia resource, wherein the resource attribute information includes at least one of a resource title or a resource description of the multimedia resource;
analyzing the resource attribute information to obtain a corresponding resource tag;
and selecting, from a preset candidate resource type set according to the resource tag, a candidate resource type whose semantic similarity to the resource tag is greater than a similarity threshold as the resource type of the target multimedia resource.
Optionally, the step of obtaining the resource type of the target multimedia resource includes:
extracting a target description text of the target multimedia resource, wherein the target description text includes one or more of a content description text and a title of the target multimedia resource;
segmenting the target description text into words, and taking the segmented words as description labels;
obtaining the semantic similarity between each description label and each preset classification category;
and determining a classification category whose semantic similarity to a description label is greater than or equal to a preset similarity threshold as the resource type of the target multimedia resource.
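The description-label classification can be sketched with a token-overlap (Jaccard) similarity in place of a learned semantic similarity; the categories and keywords below are invented for illustration.

```python
def token_similarity(label, keywords):
    """Jaccard overlap between a label's tokens and a category's keyword set."""
    a, b = set(label.split()), set(keywords)
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def classify_resource(description_labels, categories, threshold):
    """Categories whose similarity to any description label meets the threshold."""
    return [category for category, keywords in categories.items()
            if any(token_similarity(label, keywords) >= threshold
                   for label in description_labels)]
```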
In a second aspect, an embodiment of the present application provides an apparatus for commenting a multimedia resource, where the apparatus includes:
an obtaining module configured to, in response to a comment triggering operation performed by a user account on a target multimedia resource, obtain target expression package data corresponding to the target multimedia resource;
and a display module configured to display the target expression package data so that the user account can select target expression data from the target expression package data to comment on the target multimedia resource.
In a third aspect, an embodiment of the present application further provides an electronic device, which includes a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for commenting on multimedia resources provided in the present application.
In a fourth aspect, the present application further provides a storage medium, wherein, when instructions in the storage medium are executed by a processor of an electronic device, the electronic device is enabled to perform the steps of the method for commenting on multimedia resources provided in the present application.
In a fifth aspect, the present application further provides an application program, where the application program, when executed by a processor of an electronic device, implements the steps of the method for commenting a multimedia resource as provided in the present application.
In the embodiments of the application, in response to a comment triggering operation performed by a user account on a target multimedia resource, target expression package data corresponding to the target multimedia resource is acquired, and the target expression package data is displayed so that the user account can select target expression data from it to comment on the target multimedia resource. Because the expression package data displayed corresponds to the target multimedia resource, the user can select expression data and comment without tedious browsing or searching; expression data better suited to commenting on the multimedia resource is provided quickly, and the user's commenting experience is improved.
The foregoing description is only an overview of the technical solutions of the present application. In order that the technical means of the present application may be more clearly understood and implemented according to the contents of the specification, and in order to make the above and other objects, features, and advantages of the present application more apparent, detailed embodiments of the present application are set forth below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating steps of a method for commenting on a multimedia resource according to an embodiment of the present disclosure;
FIG. 2 is an interface diagram of a method for commenting on a multimedia resource according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating steps of another method for commenting on a multimedia resource according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of a commenting apparatus for a multimedia resource according to an embodiment of the present application;
FIG. 5 is a logical block diagram of an electronic device of one embodiment of the present application;
fig. 6 is a logic block diagram of an electronic device according to another embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Fig. 1 is a flowchart of steps of a method for commenting a multimedia resource according to an embodiment of the present application, where as shown in fig. 1, the method may include:
Step 101, in response to a comment triggering operation performed by a user account on a target multimedia resource, acquiring target expression package data corresponding to the target multimedia resource.
In this embodiment of the application, the target multimedia resource may be a video file to be played in a browser playing interface; it may come from a remote service server or be a video file stored locally. In addition, the target multimedia resource may also be a multimedia file in another format, such as an audio file or an image file.
Specifically, the user account may be the account the user is currently logged in to, and the user may comment on the target multimedia resource through the user account.
For example, referring to fig. 2, an interface diagram of a commenting method for multimedia resources provided in the embodiment of the present application is shown. For a video with the topic "eating and drinking of hometown food", when entering the playing interface 10 that plays the video, a user may click the "comment" button in the playing interface 10 to trigger a comment operation on the target multimedia resource, and may then enter comment content in the comment area 20 generated after the trigger to comment on the video content.
In this step, after responding to the comment triggering operation performed by the user account on the target multimedia resource, target expression package data corresponding to the target multimedia resource can be acquired. By acquiring the target expression package data corresponding to the target multimedia resource, expression packages matched with the content of the target multimedia resource can be offered for the user to select and use, which improves the pertinence of multimedia resource comments.
Specifically, the target expression package data corresponding to the target multimedia resource may be expression package data associated with the target multimedia resource, or expression package data for which a correspondence with the target multimedia resource has been established in advance.
In addition, the expression package data may be pre-stored in the terminal device, or the terminal device may send an expression package data acquisition request to the corresponding service server and obtain the expression package data from the server's response to that request.
For example, the video website may classify videos and expression package data in advance, classifying videos whose content is about travel into a travel video category and expression package data whose content is about scenery into a travel expression package category; when responding to a comment triggering operation performed by a user account on a travel video, the travel expression package data can then be determined as the target expression package data corresponding to that video.
Step 102, displaying the target expression package data, allowing the user account to select target expression data from the target expression package data to comment on the target multimedia resource.
In this step, once the target expression package data corresponding to the target multimedia resource is obtained, it can be displayed for the user to select target expression data from. This makes commenting on the multimedia resource more engaging, while the expression packages matched with the content of the target multimedia resource enrich the diversity of its comments.
For example, referring to fig. 2, when entering the playing interface 10 for playing a video, if the user triggers a comment operation, the user may comment on the video content in the comment area 20 in the playing interface 10. There, the user may input text comment content in the text input box 21, or select an expression package in the expression package display area 22 shown on the playing interface 10. Expression packages of the gourmet category corresponding to the video's category are displayed in the expression package display area 22, and the user may select the corresponding expression package content in that area to comment on the video.
In summary, the method for commenting on multimedia resources provided by the embodiment of the application includes: in response to a comment triggering operation performed by a user account on a target multimedia resource, acquiring target expression package data corresponding to the target multimedia resource; and displaying the target expression package data so that the user account can select target expression data from it to comment on the target multimedia resource. Because the expression package data displayed corresponds to the target multimedia resource, the user can select expression data and comment without tedious browsing or searching; expression data better suited to commenting on the multimedia resource is provided quickly, and the user's commenting experience is improved.
Fig. 3 is a flowchart of steps of another method for commenting on a multimedia resource provided by an embodiment of the present application, and as shown in fig. 3, the method may include:
step 201, responding to a comment triggering operation implemented by a user account on a target multimedia resource, and acquiring a resource type of the target multimedia resource.
In the embodiment of the application, the resource type of the target multimedia resource may be used to reflect the category of the target multimedia resource, for example, the content is a video eaten by a person, and the resource type may be determined to be "eat and broadcast". The contents are videos played by people in a tourism way, and the resource type of the videos can be determined to be 'tourism'.
Specifically, the description text or title of the target multimedia resource may be analyzed to determine the resource type of the target multimedia resource. The content of successive frames of the target multimedia asset may also be analyzed to determine the asset type of the target multimedia asset.
Optionally, in a specific implementation manner of the embodiment of the present application, step 201 may specifically include:
the sub-step 2011 of identifying each resource image frame included in the target multimedia resource, and determining a target object presented in the target multimedia resource and an association relationship between the target objects.
In the embodiment of the application, each resource image frame included in the target multimedia resource can be identified through a deep learning algorithm model, so as to determine the target objects presented in the target multimedia resource and the association relations between them. The deep learning algorithm model is based on a deep learning algorithm and can realize image identification and feature classification over the resource image frames. In particular, a convolutional neural network may be used to analyze and classify the resource image frames of the target multimedia resource and output, in real time, the target objects presented in the target multimedia resource and the association relations between them.
The deep learning model can be continuously trained in practical application to improve the precision of its parameters; it has strong adaptivity and the capacity for automatic updating and iteration, which improves processing efficiency.
Substep 2012, determining the resource type of the multimedia resource according to the object attribute of the target object, the association relationship between the target objects and the resource attribute information of the multimedia resource.
The object attribute at least comprises an object type or an object name, and the resource attribute information at least comprises one of a resource title or a resource description of the multimedia resource.
Specifically, the target multimedia resource is in essence a combination of a series of consecutive resource image frames, each of which can be regarded as an image file, and the image features corresponding to each resource image frame can be extracted with a neural-network-based feature extraction algorithm. A feature is a characteristic, or set of characteristics, by which a class of objects differs from other classes; it is data that can be obtained through measurement or processing. The main purpose of feature extraction is dimensionality reduction: an original image sample is projected to a low-dimensional feature space to obtain low-dimensional features that reflect the essence of the image sample or distinguish it from other samples.
Through the analysis of the image characteristics corresponding to the resource image frames, the target object and the object attribute of the target object included in each resource image frame can be further identified.
In practical application, the image features may be processed through a Region Proposal Network (RPN) to generate the category regions contained in the image features of each resource image frame. With a well-trained region proposal network, the category region where the target object is located in each image feature can be accurately determined, improving video classification efficiency.
In the embodiment of the application, a mature convolutional neural network model and classifier from the prior art can be used to classify the image features. For example, in the image features of a picture of a person, there may be a category region where the person target object is located; inputting that region into a classifier yields the object attribute "person". The category region where a target object is located can therefore be further classified by the classifier to determine the object attribute corresponding to that region, where the object attributes may include, but are not limited to: person, food, landscape, clothing, animal, and so on.
For example, if a video frame shows people eating from a bowl, processing the image features of the frame with the region proposal network yields the regions where the people, the bowl, and the food are located, and inputting each region into the classifier separately yields the object attributes contained in the frame: person, bowl, and food.
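The per-frame pipeline described above (region proposals, then a classifier, then object attributes) can be caricatured as below. The region descriptors and the table-lookup "classifier" are placeholders for the RPN and CNN classifier, not an implementation of them.

```python
# Hypothetical attribute table standing in for a trained classifier.
ATTRIBUTE_TABLE = {"person-like": "person", "bowl-like": "bowl", "dish-like": "food"}

def classify_frame(frame_regions, classifier=ATTRIBUTE_TABLE.get):
    """Map each proposed region of one frame to an object attribute;
    regions the classifier does not recognize come back as "unknown"."""
    return [classifier(region) or "unknown" for region in frame_regions]
```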
In the embodiment of the application, a complete behavior attribute across consecutive resource image frames can be obtained by performing behavior analysis on the association relationships between the target objects in those frames, and the resource type of the target multimedia resource can be determined according to that complete behavior attribute.
Specifically, a Recurrent Neural Network (RNN) may be used to perform behavior analysis on the association relationships between target objects in consecutive resource image frames to obtain the behavior attribute of the target multimedia resource. In particular, a Long Short-Term Memory network (LSTM) may be used to analyze the time-series motion relationships between the image features of consecutive frames, thereby obtaining the behavior attribute of the target multimedia resource.
For example, a complete jumping motion consists of four phases: knee bend, take-off, fall, and landing, each of which corresponds to a fixed motion posture. Suppose a video contains four frames, sorted by video timestamp, and the category features of each frame are determined. The image feature of the first frame may be a person object attribute whose motion posture is a knee bend; the second frame, a person object attribute whose motion posture is a take-off; the third frame, a fall; and the fourth frame, a landing. Because the video contains the behavior attribute of a complete jumping action, the resource type of the video can be determined, from the object attribute of the target object, the behavior attribute obtained from the association relationships between target objects, and the resource title or resource description of the multimedia resource, as a category related to jumping motion, such as sports or dance.
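As a minimal sketch of the jump example, the check for a complete behavior can be expressed as in-order template matching over per-frame postures. The posture labels are assumed outputs of a per-frame pose classifier, and an LSTM would learn this temporal pattern rather than match it explicitly; this is purely illustrative.

```python
# Illustrative sketch: does a frame sequence contain a complete behavior?
# The template and posture labels are assumptions standing in for learned
# temporal modeling (e.g. an LSTM over per-frame image features).

JUMP_TEMPLATE = ["knee-bend", "take-off", "fall", "land"]

def contains_behavior(frame_postures, template):
    """True if the template occurs as an in-order subsequence of the frames."""
    it = iter(frame_postures)
    # `step in it` advances the iterator, enforcing temporal order.
    return all(step in it for step in template)

frames = ["stand", "knee-bend", "take-off", "fall", "land", "stand"]
print(contains_behavior(frames, JUMP_TEMPLATE))  # True
```

A video matching the full template would then be assigned a jump-related resource type such as sports or dance, together with the other signals described above.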
For example, several consecutive video frames can be extracted from a target multimedia resource whose content is an eating broadcast (mukbang), and a deep learning algorithm can determine that core target objects such as a person, a bowl, and food are present in those frames. Then, by identifying changes in the motion features of the objects across the consecutive frames, the opening and closing motion of the person's lips can be detected; combined with the food and the bowl, the resource type of the target multimedia resource can be determined to be eating broadcast or food.
As another example, several consecutive video frames may be extracted from a target multimedia resource whose content is a travel log, and a deep learning algorithm may determine that core target objects such as a person, scenery, and a vehicle are present in those frames. Then, by identifying changes in the motion features of the objects across the consecutive frames, the joint movement of the person and the vehicle and the changes in the scenery can be detected, so the resource type of the target multimedia resource can be determined to be travel.
As yet another example, for a live performance work, the identified core objects include the performer, clothing, and performance props (such as a microphone, a guitar, or a piano). The performer's facial image is further identified and compared with images in a preset celebrity sample library; if the person's identification information matches a celebrity in the library, then, further combining the clothing and performance props, the resource type of the target multimedia resource is determined to be a performance.
Optionally, in another specific implementation manner of the embodiment of the present application, step 201 may specifically include:
sub-step 2013, extracting resource attribute information of the target multimedia resource, wherein the resource attribute information at least comprises one of a resource title or a resource description of the multimedia resource.
In another specific implementation manner of the embodiment of the present application, after a target multimedia resource is generated, an operator generally labels its content, that is, adds a title or a resource description to the target multimedia resource according to its content.
And a substep 2014 of analyzing the resource information to obtain a corresponding resource label.
In this step, analyzing the resource information may consist of natural-language semantic understanding of the resource information, so that the result of the semantic understanding operation is taken as the resource tag corresponding to the resource information.
For example, the resource title of a video on a sports topic is: "Football kid", and its resource description is: "telling the story of a poor child with a talent for sports who becomes a football superstar". Through semantic understanding of the resource title and the resource description, the obtained resource tags may include: "football", "inspiring", and "sports".
Sub-step 2015, selecting a candidate resource type with semantic similarity greater than a similarity threshold with the resource label from a preset candidate resource type set according to the resource label as the resource type of the target multimedia resource.
In this embodiment of the application, a candidate resource type set may be established in advance. After the resource tag corresponding to the resource information is obtained, semantic similarity is computed between that resource tag and each candidate resource type in the set, and a candidate resource type whose semantic similarity with the resource tag exceeds the similarity threshold is taken as the resource type of the target multimedia resource.
For example, continuing the example in sub-step 2014 above, assume the candidate resource type set includes three candidate resource types: "eating broadcast", "travel", and "sports". Semantic similarity is computed between the resource tags "football", "inspiring", and "sports" and each candidate resource type in the set; the similarity between the candidate resource type "sports" and the resource tag "sports" is found to be the largest, so "sports" is determined to be the resource type of the multimedia resource.
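The tag-versus-candidate comparison in sub-steps 2014-2015 can be sketched as cosine similarity over embedding vectors. The tiny hand-made two-dimensional embeddings and the threshold below are assumptions standing in for a real word-embedding model; only the selection logic is the point.

```python
# Hedged sketch of sub-steps 2014-2015: pick the candidate resource type whose
# semantic similarity to any resource tag is highest and above a threshold.
# The toy embeddings and the 0.95 threshold are illustrative assumptions.
import math

EMBEDDINGS = {
    "football": (0.9, 0.1), "inspiring": (0.5, 0.5), "sports": (1.0, 0.0),
    "eat-broadcast": (0.0, 1.0), "travel": (0.3, 0.6),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def pick_resource_type(tags, candidates, threshold=0.95):
    best_type, best_sim = None, threshold
    for cand in candidates:
        for tag in tags:
            sim = cosine(EMBEDDINGS[tag], EMBEDDINGS[cand])
            if sim >= best_sim:
                best_type, best_sim = cand, sim
    return best_type

print(pick_resource_type(["football", "inspiring", "sports"],
                         ["eat-broadcast", "travel", "sports"]))
# sports
```

With a real embedding model the same argmax-over-threshold rule applies; only the similarity scores change.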
In summary, by performing fast semantic understanding and semantic similarity computation on the target description text of the target multimedia resource, the resource type of the target multimedia resource can be determined directly from the analysis result, improving processing efficiency.
Optionally, in another specific implementation manner of the embodiment of the present application, step 201 may specifically include:
sub-step 2016, extracting a target description text of the target multimedia resource, wherein the target description text comprises one or more of a content expression text and a title of the target multimedia resource.
Specifically, this step may refer to the substep 2013, which is not described herein again.
And a substep 2017 of extracting the participle of the target description text and taking the participle as a description label.
In this step, word segmentation may be performed on the target description text to obtain a plurality of words, which are used as description labels. For example, the resource title of a video on a sports topic is: "Football kid", and its resource description is: "telling the story of a poor child with a talent for sports who becomes a football superstar"; by segmenting the resource title and resource description, the obtained words may include: football, inspiring, sports, and so on.
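Sub-step 2017 can be sketched minimally as tokenization plus stop-word filtering. A real system would use a proper Chinese word segmenter; the whitespace tokenizer and English stop-word list below are assumptions purely for illustration.

```python
# Minimal illustration of sub-step 2017: segment a description text into
# candidate description labels. The tokenizer and stop-word list are assumed
# stand-ins for a real word segmenter.
import re

STOP_WORDS = {"a", "the", "of", "for", "that", "who", "with", "in",
              "becomes", "telling", "story"}

def extract_description_labels(text):
    words = re.findall(r"[a-z]+", text.lower())
    return [w for w in words if w not in STOP_WORDS]

title = "Football kid"
description = ("telling the story of a poor child with a talent "
               "for sports who becomes a football superstar")
print(extract_description_labels(title + " " + description))
```

The surviving words then serve as the description labels compared against the preset classification categories in sub-steps 2018-2019.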
And a substep 2018 of obtaining semantic similarity between the description label and the preset classification category.
And a substep 2019 of determining the classification category with the semantic similarity between the classification category and the description label being greater than or equal to a preset similarity threshold as the resource type of the target multimedia resource.
In this embodiment of the application, the classification categories may be established in advance. After the resource tag corresponding to the resource information is obtained, semantic similarity is computed between the resource tag and each classification category, and a classification category whose semantic similarity with the resource tag exceeds the similarity threshold is taken as the resource type of the target multimedia resource.
In summary, by performing fast word segmentation and semantic similarity computation on the target description text of the target multimedia resource, the resource type of the target multimedia resource can be determined directly from the analysis result, improving processing efficiency.
Step 202, selecting corresponding target expression packet data according to the resource type of the target multimedia resource.
In the embodiment of the application, a correspondence between expression package data and resource types can be established in advance, and the corresponding target expression package data is selected for the target multimedia resource according to that correspondence.
For example, a video website may classify videos and expression package data in advance, classifying all videos whose content concerns travel into a "travel" video category and all expression package data whose content concerns scenery into a "travel" expression package category. When responding to a comment triggering operation performed by a user account on a "travel" video, the "travel" expression package data can then be determined as the target expression package data corresponding to the "travel" video.
Optionally, in a specific implementation manner of the embodiment of the present application, step 202 may specifically include:
in the substep 2021, a target expression package category corresponding to the resource type of the target multimedia resource is determined from a preset corresponding relationship list, where the corresponding relationship list includes a corresponding relationship between the resource type and the expression package category.
In the embodiment of the application, a correspondence list containing correspondences between resource types and expression package categories may be established in advance, and the target expression package category corresponding to the resource type of the target multimedia resource is determined from this list.
For example, a video website may classify videos and expression package data in advance, classifying all videos whose content concerns travel into a "travel" video category and all expression package data whose content concerns scenery into a "travel" expression package category, establishing a correspondence between the "travel" video category and the "travel" expression package category, and importing that correspondence into the correspondence list. When the target multimedia resource is a video in the "travel" video category, the "travel" expression package category can then be determined as the target expression package category.
And a substep 2022 of selecting target expression package data corresponding to the target expression package category from a preset expression package database, wherein each expression package data included in the expression package database has a corresponding expression package category.
In this step, the target expression package data corresponding to the target expression package category may be selected from the expression package database according to that category. For example, referring to the example in sub-step 2021, the expression package database may store the "travel" expression package category and the expression package data corresponding to it; given the target expression package category "travel", all target expression package data in that category can be obtained.
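The two lookups in sub-steps 2021-2022 can be sketched with assumed data shapes: a correspondence list mapping resource types to expression package categories, and an expression package database mapping categories to their packs. All names and filenames below are illustrative assumptions.

```python
# Sketch of sub-steps 2021-2022 under assumed data shapes (all names illustrative).

CORRESPONDENCE_LIST = {"travel": "travel-pack", "sports": "sports-pack"}

EXPRESSION_PACKAGE_DB = {
    "travel-pack": ["scenery.gif", "plane.gif", "yacht.gif"],
    "sports-pack": ["goal.gif", "trophy.gif"],
}

def select_target_packages(resource_type):
    category = CORRESPONDENCE_LIST.get(resource_type)  # sub-step 2021
    return EXPRESSION_PACKAGE_DB.get(category, [])     # sub-step 2022

print(select_target_packages("travel"))
# ['scenery.gif', 'plane.gif', 'yacht.gif']
```

Both lookups are constant-time dictionary reads, which is what makes the pre-established correspondence list fast at comment time.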
By establishing the correspondence between resource types and expression package categories in advance, the determination and acquisition of target expression package data can in practice be completed quickly and without supervision via a classification algorithm, improving processing efficiency.
Optionally, in a specific implementation manner of the embodiment of the present application, the sub-step 2022 may specifically include:
substep 20221, selecting all expression package data corresponding to the target expression package category from the expression package database as candidate expression package data;
substep 20222, selecting, from the candidate expression package data, candidate expression package data whose usage count is greater than or equal to a first preset threshold, according to the first historical usage record of the candidate expression package data, as the target expression package data.
Wherein the first historical usage record includes the number of times the user account has used each candidate expression package data.
In the embodiment of the application, a first historical usage record containing the number of times the user account has used each candidate expression package data can be acquired from a storage database of the terminal or of the service server. The usage count of each candidate expression package data is obtained from this record, and candidate expression package data whose usage count is greater than or equal to the first preset threshold is taken as the target expression package data. Such data can be regarded as the expression package data commonly used by the user account, thereby providing expression package data that matches the user's historical usage habits and improving the user experience.
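The threshold filter of sub-steps 20221-20222 can be sketched directly; the record format, filenames, and threshold are assumptions.

```python
# Illustrative sketch of sub-step 20222: filter candidate packs by the user
# account's historical usage counts. Data shapes and threshold are assumptions.

def select_frequently_used(candidates, usage_record, threshold):
    """Keep candidates whose usage count meets the first preset threshold."""
    return [p for p in candidates if usage_record.get(p, 0) >= threshold]

first_history = {"goal.gif": 12, "trophy.gif": 2, "scenery.gif": 7}
candidates = ["goal.gif", "trophy.gif", "scenery.gif"]
print(select_frequently_used(candidates, first_history, threshold=5))
# ['goal.gif', 'scenery.gif']
```

Packs the account has never used default to a count of zero and are filtered out, matching the "commonly used by the user account" interpretation above.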
Optionally, in another specific implementation manner of the embodiment of the present application, the sub-step 2022 may further include:
substep 20223, obtaining current user profile data for the user account and other user profile data for user accounts other than the current user.
The current user portrait data at least comprises one of the age, gender, region, occupation tag, and interest tag of the user account, and the other user portrait data at least comprises one of the age, gender, region, occupation tag, and interest tag of the other user accounts.
In this step, a request may be made to the service server to obtain the user portrait data of all user accounts. A user portrait, also called a user persona, is an effective tool for profiling a target user and linking user needs to design decisions, and is widely applied in various fields.
In actual operation, the attributes and behaviors of a user can be linked to their expected needs: for example, the age, gender, region, occupation tags, and interest tags filled in when the user registered the account, the browsing history the user generates in the application, and so on. By collecting this information, user portrait data for the user can be created. Taking a video application as an example, if a user frequently watches action films in the application, an "action" tag is added to the user portrait data corresponding to that user; when personalized recommendations are later made for the user, other videos carrying the "action" tag can be recommended according to the "action" tag in the portrait data, making the recommendations more accurate.
If the user portrait data is stored locally, it can be extracted directly from local storage.
Substep 20224, determining the other user accounts whose user portrait data has a similarity to the current user portrait data greater than or equal to a preset similarity threshold as the similar user accounts of the user account.
Specifically, the content of the user portrait data may be text, so text similarity can be computed between the current user portrait data and the other user portrait data, and the other user accounts whose portrait data has a similarity to the current user portrait data greater than or equal to the preset similarity threshold are determined as the similar user accounts of the user account.
The similar user accounts of a user account can be understood as its associated users: in a recommendation service, associating the habits of similar user accounts with the current user's account can greatly expand the scope and richness of recommendations.
Substep 20225, selecting, from the candidate expression package data, candidate expression package data whose usage count is greater than or equal to a second preset threshold, according to the second historical usage record of the candidate expression package data, as the target expression package data.
Wherein the second historical usage record includes the number of times each candidate expression package data has been used by the similar user accounts.
In this step, a second historical usage record containing the number of times the similar user accounts have used each candidate expression package data may be obtained from a storage database of the terminal or of the service server. The usage count of each candidate expression package data is obtained from this record, and candidate expression package data whose usage count is greater than or equal to the second preset threshold is taken as the target expression package data. Such data can be regarded as expression package data commonly used by the similar user accounts, thereby providing the current user with a broader range of expression package data according to similar users' historical usage habits and improving the user experience.
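Sub-steps 20223-20225 can be combined into one sketch under assumed data shapes: Jaccard overlap of profile tags stands in for text similarity of portrait data, and the similar accounts' aggregate usage counts drive the second filter. Account IDs, tags, filenames, and thresholds are all illustrative assumptions.

```python
# Combined sketch of sub-steps 20223-20225 (all data shapes assumed).

def jaccard(a, b):
    """Toy stand-in for text similarity between two portraits (tag sets)."""
    return len(a & b) / len(a | b)

def similar_accounts(current, others, threshold=0.5):
    return [uid for uid, tags in others.items()
            if jaccard(current, tags) >= threshold]

def select_by_similar_usage(candidates, second_history, accounts, threshold):
    """Keep candidates whose aggregate usage by similar accounts meets the threshold."""
    totals = {p: sum(second_history.get(uid, {}).get(p, 0) for uid in accounts)
              for p in candidates}
    return [p for p in candidates if totals[p] >= threshold]

current_portrait = {"female", "20s", "travel", "food"}
other_portraits = {"u2": {"female", "20s", "travel", "music"},
                   "u3": {"male", "50s", "finance"}}
sims = similar_accounts(current_portrait, other_portraits)
history = {"u2": {"scenery.gif": 9, "plane.gif": 1}}
print(sims, select_by_similar_usage(["scenery.gif", "plane.gif"], history, sims, 5))
# ['u2'] ['scenery.gif']
```

In production the similarity function would be a real text- or embedding-based measure; the filter structure stays the same.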
Optionally, in another specific implementation manner of the embodiment of the present application, step 202 may further include:
and a substep 2023 of determining the trending resource type corresponding to each expression package data according to the third history use record of each expression package data in the expression package data base.
The trending resource type is the resource type of the multimedia resources on which the corresponding expression package data has been used for comments the most times; the third historical usage record includes the number of times the expression package data has been used for comments.
In this step, the number of times each expression package data has been used for comments is counted from the third historical usage record. If, in the record, a class of hot multimedia resources 1 has been commented on with expression package A 100 times, a class of hot multimedia resources 2 with expression package B 1000 times, and a class of hot multimedia resources 3 with expression package C 700 times, then for expression package A the resource type of hot multimedia resources 1, on which it has been used for comments the most times, is determined as its trending resource type X.
Substep 2024, in the correspondence list, when the resource type corresponding to the expression package data is inconsistent with its trending resource type, replacing the resource type corresponding to the expression package data in the correspondence list with the trending resource type.
In this step, referring to the example in sub-step 2023, assume that the resource type recorded in the current correspondence list for the expression package category of expression package A is Y. The stored correspondence can then be considered out of line with actual usage, so the resource type Y corresponding to expression package A in the original correspondence list can be replaced with the trending resource type X. In addition, if the correspondence list contains no correspondence for expression package A, a correspondence between expression package A and the trending resource type X can be newly established.
By keeping real-time statistics on the number of times expression package data is used for comments in the historical records, the correspondence list can be updated in real time, improving both the timeliness of the correspondences it contains and the accuracy of its data.
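The update rule of sub-steps 2023-2024 can be sketched as an argmax over comment counts followed by a conditional replacement. The record format — comment counts keyed by (pack, resource type) — is an assumption for illustration.

```python
# Hedged sketch of sub-steps 2023-2024 (data shapes assumed).

def trending_type(third_history):
    """third_history: {(pack, resource_type): comment_count} -> {pack: type}."""
    best = {}
    for (pack, rtype), count in third_history.items():
        if pack not in best or count > best[pack][1]:
            best[pack] = (rtype, count)
    return {pack: rtype for pack, (rtype, _) in best.items()}

def update_correspondence(correspondence, third_history):
    for pack, rtype in trending_type(third_history).items():
        if correspondence.get(pack) != rtype:
            correspondence[pack] = rtype  # replace, or newly establish if absent
    return correspondence

history = {("A", "hot-1"): 100, ("B", "hot-2"): 1000, ("C", "hot-3"): 700,
           ("A", "hot-2"): 20}
print(update_correspondence({"A": "Y"}, history))
# {'A': 'hot-1', 'B': 'hot-2', 'C': 'hot-3'}
```

Running this periodically (or on each new batch of comment statistics) is what keeps the correspondence list current.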
Optionally, in another specific implementation manner of the embodiment of the present application, step 202 may further include:
and a substep 2025 of extracting the emotion packet label of each emotion packet data in the emotion packet data base to obtain an emotion packet label set.
In the embodiment of the application, after each piece of expression package data in the expression package database is generated, an operator generally labels its content, that is, adds an expression package label to the expression package data according to its content. The expression package label may include: an expression package style label, an expression package theme label, and the like.
Therefore, an expression package label set can be obtained by extracting the expression package label of each expression package data in the expression package database.
Substep 2026, performing semantic clustering on the expression package label set, determining the expression package categories and the correspondences between different expression package labels and those categories, and correspondingly obtaining the expression package category of each expression package data in the expression package database.
In this step, by performing semantic clustering on the expression package label set, the correspondence between each expression package label and an expression package category can be determined.
Specifically, the semantic clustering process may include: clustering the expression package data with a semantic clustering algorithm according to their expression package labels, that is, clustering labels with similar semantics into one cluster, thereby obtaining the expression package category corresponding to each expression package data; each cluster is one expression package category containing the multiple expression package data that belong to it.
For example, a semantic clustering algorithm can classify expression packages depicting eating and drinking actions or food patterns into a food category, and expression packages depicting scenery, airplanes, or yachts into a travel category.
Because a semantic clustering algorithm can operate without supervision, obtaining the correspondence between expression package labels and expression package categories only requires completing unsupervised training of the semantic clustering model on training data.
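The label clustering of sub-steps 2025-2026 can be sketched as greedy clustering over embedding vectors with a cosine-similarity threshold. The toy embeddings and threshold are assumptions; a production system would use learned embeddings and a standard unsupervised clustering algorithm.

```python
# Illustrative sketch of sub-steps 2025-2026: greedy semantic clustering of
# expression package labels. Embeddings and the 0.8 threshold are assumptions.
import math

LABEL_EMBEDDINGS = {
    "eating": (1.0, 0.0), "drinking": (0.95, 0.05), "food": (0.9, 0.1),
    "scenery": (0.0, 1.0), "airplane": (0.1, 0.9), "yacht": (0.05, 0.95),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def cluster_labels(labels, threshold=0.8):
    clusters = []  # each cluster is a list of labels; its first member anchors it
    for label in labels:
        for cluster in clusters:
            if cosine(LABEL_EMBEDDINGS[label],
                      LABEL_EMBEDDINGS[cluster[0]]) >= threshold:
                cluster.append(label)
                break
        else:
            clusters.append([label])  # no similar anchor: start a new category
    return clusters

print(cluster_labels(list(LABEL_EMBEDDINGS)))
# [['eating', 'drinking', 'food'], ['scenery', 'airplane', 'yacht']]
```

Each resulting cluster corresponds to one expression package category (here, roughly "food" and "travel"), and every expression package inherits the category of its labels.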
And 203, displaying the target expression package data, allowing the user account to select target expression data from the target expression package data, and commenting the target multimedia resources.
The implementation manner of this step is similar to the implementation process of step 102 described above, and this embodiment of the present application is not described in detail here.
In summary, the method for commenting on multimedia resources provided by the embodiment of the application includes: in response to a comment triggering operation performed by a user account on a target multimedia resource, acquiring target expression package data corresponding to the target multimedia resource; and displaying the target expression package data so that the user account can select target expression data from it to comment on the target multimedia resource. In this application, when responding to a comment triggering operation performed by a user account on a target multimedia resource, the corresponding target expression package data is displayed for that resource, so that the user can select expression data and comment on the resource without tedious operations such as searching or browsing; expression data better suited to commenting on the multimedia resource can thus be provided to the user quickly, improving the commenting experience.
Fig. 4 is a block diagram of a comment apparatus for a multimedia resource according to an embodiment of the present application, and as shown in fig. 4, the comment apparatus includes:
the obtaining module 301 is configured to, in response to a comment triggering operation performed on a target multimedia resource by a user account, obtain target expression package data corresponding to the target multimedia resource.
Optionally, the obtaining module 301 includes:
an obtaining sub-module configured to obtain a resource type of the target multimedia resource;
optionally, the obtaining sub-module includes:
a second extracting unit, configured to extract resource attribute information of the target multimedia resource, where the resource attribute information includes at least one of a resource title or a resource description of the multimedia resource;
the analysis unit is configured to analyze the resource information to obtain a corresponding resource label;
and the second selecting unit is configured to select a candidate resource type with semantic similarity larger than a similarity threshold value with the resource label from a preset candidate resource type set according to the resource label as the resource type of the target multimedia resource.
Optionally, the obtaining sub-module includes: a third extraction unit, configured to extract a target description text of the target multimedia resource, wherein the target description text comprises one or more of a content expression text and a title of the target multimedia resource;
a fourth extraction unit configured to extract a word segmentation of the target description text and take the word segmentation as a description label;
a similarity obtaining unit configured to obtain semantic similarity between the description label and the preset classification category;
a fifth determining unit, configured to determine, as the resource type of the target multimedia resource, a classification category of which the semantic similarity with the description tag is greater than or equal to a preset similarity threshold.
Optionally, the obtaining sub-module includes:
a third determining unit, configured to identify each resource image frame included in the target multimedia resource, and determine a target object presented in the target multimedia resource and an association relationship between the target objects;
a fourth determining unit, configured to determine a resource type of the multimedia resource according to the object attribute of the target object, the association relationship between the target objects, and the resource attribute information of the multimedia resource; the object attribute at least comprises an object type or an object name, and the resource attribute information at least comprises one of a resource title or a resource description of the multimedia resource.
And the selection submodule is configured to select corresponding target expression packet data according to the resource type of the target multimedia resource.
Optionally, the selecting sub-module includes:
the first determining unit is configured to determine a target expression package category corresponding to the resource type of the target multimedia resource from a preset corresponding relationship list, wherein the corresponding relationship list comprises a corresponding relationship between the resource type and the expression package category;
the first selecting unit is configured to select target expression packet data corresponding to the target expression packet type from a preset expression packet database, wherein each expression packet data included in the expression packet database has a corresponding expression packet type.
Optionally, the first selecting unit includes:
the first selecting subunit is configured to select all expression package data corresponding to the target expression package category from the expression package database as candidate expression package data;
the second selecting subunit is configured to select, from the candidate expression package data, candidate expression package data with usage times greater than or equal to a first preset threshold as the target expression package data according to a first historical usage record of the candidate expression package data, where the first historical usage record includes usage times of each candidate expression package data by the user account.
Optionally, the first selecting unit further includes:
an obtaining subunit, configured to obtain current user portrait data of the user account and other user portrait data of user accounts other than the current user, where the current user portrait data at least includes one of the age, gender, region, occupation tag, and interest tag of the user account, and the other user portrait data at least includes one of the age, gender, region, occupation tag, and interest tag of the other user accounts;
a determining subunit, configured to determine the other user accounts whose user portrait data has a similarity to the current user portrait data greater than or equal to a preset similarity threshold as the similar user accounts of the user account;
a third selecting subunit configured to select, from the candidate expression package data, candidate expression package data with a use frequency greater than or equal to a second preset threshold as the target expression package data according to a second historical use record of the candidate expression package data, where the second historical use record includes the use frequency of the similar user account for each candidate expression package data.
Optionally, the selecting sub-module further includes:
a second determining unit, configured to determine a popular resource type corresponding to each piece of expression package data according to a third historical usage record of that expression package data in the expression package database; the popular resource type is the resource type of the multimedia resources on which the corresponding expression package data has been used for comments the most times; the third historical usage record includes the number of times the expression package data has been used for comments;
a replacing unit, configured to, when the resource type corresponding to the expression package data in the correspondence list is inconsistent with the popular resource type, replace that resource type in the correspondence list with the popular resource type.
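A compact sketch of this popular-type update follows. The comment log as a list of (package_id, resource_type) events and the dict-shaped correspondence are assumptions made for illustration; only the counting-and-replacement logic mirrors the units described above.

```python
from collections import Counter

def update_correspondence(correspondence, comment_log):
    """Replace each package's stored resource type with its popular one."""
    per_package = {}
    for package_id, resource_type in comment_log:
        # Third historical usage record: comment counts per resource type.
        per_package.setdefault(package_id, Counter())[resource_type] += 1
    for package_id, counter in per_package.items():
        popular = counter.most_common(1)[0][0]  # most-commented resource type
        # Replacing unit: update the list only when the stored type disagrees.
        if correspondence.get(package_id) != popular:
            correspondence[package_id] = popular
    return correspondence
```

A package registered under "food" but used twice on "travel" resources and once on "food" resources would be remapped to "travel".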
The selecting sub-module is configured to select the corresponding target expression package data according to the resource type of the target multimedia resource.
Optionally, the selecting sub-module further includes:
a first extraction unit, configured to extract the expression package label of each piece of expression package data in the expression package database to obtain an expression package label set;
and a clustering unit, configured to perform semantic clustering on the expression package label set to determine expression package categories and the correspondence between different expression package labels and those categories, thereby obtaining the expression package category of each piece of expression package data in the expression package database.
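The clustering unit can be approximated by the greedy, dependency-free sketch below. This is a crude stand-in: the patent says "semantic clustering" without naming an algorithm, and a real system would presumably cluster label embeddings rather than raw string similarity; the greedy strategy and the 0.6 threshold are assumptions for illustration only.

```python
from difflib import SequenceMatcher

def cluster_labels(labels, threshold=0.6):
    """Greedy clustering of expression package labels by string similarity."""
    clusters = []  # list of (representative label, member labels)
    for label in labels:
        for representative, members in clusters:
            # A label joins the first cluster whose representative it resembles.
            if SequenceMatcher(None, representative, label).ratio() >= threshold:
                members.append(label)
                break
        else:
            # No cluster fits: the label seeds a new expression package category.
            clusters.append((label, [label]))
    return {representative: members for representative, members in clusters}
```

Each resulting cluster plays the role of one expression package category, with the correspondence from label to category read off the cluster membership.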
A presentation module 302, configured to present the target expression package data, so that the user account can select target expression data from the target expression package data to comment on the target multimedia resource.
In summary, the apparatus for commenting on multimedia resources provided by the embodiments of the present application includes: an obtaining module, configured to obtain, in response to a comment triggering operation performed by a user account on a target multimedia resource, target expression package data corresponding to the target multimedia resource; and a display module, configured to display the target expression package data, so that the user account can select target expression data from the target expression package data to comment on the target multimedia resource. In this way, when a comment triggering operation performed by the user account on the target multimedia resource is detected, the expression package data corresponding to that resource is displayed, so that the user can select expression data and comment on the resource without tedious operations such as searching. Expression data better suited to commenting on the multimedia resource can thus be provided to the user quickly, improving the user's commenting experience.
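The overall flow the summary describes can be sketched end to end as follows. All names and data shapes here are illustrative assumptions: the resource type of the commented resource is looked up in a correspondence list to obtain a category, and packages of that category form the target data the display module would then present.

```python
def on_comment_trigger(resource, correspondence, package_db):
    """Obtaining module sketch: comment trigger -> target expression package data."""
    # Map the resource's type to its expression package category.
    target_category = correspondence[resource["type"]]
    # Packages of that category would then be displayed for the user to pick from.
    return [p for p in package_db if p["category"] == target_category]
```

For example, a comment trigger on a "video" resource whose type maps to the "funny" category yields only the "funny" packages for display.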
Fig. 5 is a block diagram illustrating an electronic device 600 according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is used to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 606 provides power to the various components of the electronic device 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action but also detect the duration and pressure associated with the touch or swipe operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 600 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focus and optical zoom capabilities.
The audio component 610 is used to output and/or input audio signals. For example, the audio component 610 may include a Microphone (MIC) for receiving external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the electronic device 600. For example, the sensor component 614 may detect an open/closed state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600. The sensor component 614 may also detect a change in the position of the electronic device 600 or of a component of the electronic device 600, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in the temperature of the electronic device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is operable to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for implementing the above-described method for commenting on multimedia resources.
In an exemplary embodiment, a non-transitory storage medium including instructions, such as the memory 604 including instructions, executable by the processor 620 of the electronic device 600 to perform the above-described method is also provided. For example, the non-transitory storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Fig. 6 is a block diagram illustrating an electronic device 700 according to an example embodiment. For example, the electronic device 700 may be provided as a server. Referring to fig. 6, the electronic device 700 includes a processing component 722 that further includes one or more processors, and memory resources, represented by a memory 732, for storing instructions, such as application programs, that are executable by the processing component 722. The application programs stored in the memory 732 may include one or more modules, each of which corresponds to a set of instructions. Further, the processing component 722 is configured to execute the instructions to perform the above-described method for commenting on multimedia resources.
The electronic device 700 may also include a power component 726 configured to perform power management of the electronic device 700, a wired or wireless network interface 750 configured to connect the electronic device 700 to a network, and an input/output (I/O) interface 758. The electronic device 700 may operate based on an operating system stored in the memory 732, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
An embodiment of the present application further provides an application program which, when executed by a processor of an electronic device, implements the above method for commenting on multimedia resources.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (10)

1. A method for commenting on a multimedia asset, the method comprising:
responding to a comment triggering operation performed by a user account on a target multimedia resource, and acquiring target expression package data corresponding to the target multimedia resource;
and displaying the target expression package data, so that the user account selects target expression data from the target expression package data to comment on the target multimedia resource.
2. The method of claim 1, wherein the step of obtaining the target expression package data corresponding to the target multimedia resource comprises:
acquiring the resource type of the target multimedia resource;
and selecting corresponding target expression package data according to the resource type of the target multimedia resource.
3. The method of claim 2, wherein the step of selecting the corresponding target expression package data according to the resource type of the target multimedia resource comprises:
determining a target expression package category corresponding to the resource type of the target multimedia resource from a preset correspondence list, wherein the correspondence list includes correspondences between resource types and expression package categories;
and selecting target expression package data corresponding to the target expression package category from a preset expression package database, wherein each piece of expression package data in the expression package database has a corresponding expression package category.
4. The method of claim 3, wherein the step of selecting the target expression package data corresponding to the target expression package category from a preset expression package database comprises:
selecting all expression package data corresponding to the target expression package category from the expression package database as candidate expression package data;
selecting, from the candidate expression package data, candidate expression package data whose usage count is greater than or equal to a first preset threshold as the target expression package data according to a first historical usage record of the candidate expression package data, wherein the first historical usage record includes the number of times the user account has used each piece of candidate expression package data.
5. The method of claim 4, further comprising:
acquiring current user portrait data of the user account and other user portrait data of user accounts other than the current user account, wherein the current user portrait data includes at least one of the age, gender, region, occupation label, and interest label of the user account, and the other user portrait data includes at least one of the age, gender, region, occupation label, and interest label of the other user accounts;
determining, as similar user accounts of the user account, the other user accounts whose user portrait data has a similarity to the current user portrait data greater than or equal to a preset similarity threshold;
selecting, from the candidate expression package data, candidate expression package data whose usage count is greater than or equal to a second preset threshold as the target expression package data according to a second historical usage record of the candidate expression package data, wherein the second historical usage record includes the number of times the similar user accounts have used each piece of candidate expression package data.
6. The method of claim 3, further comprising:
determining a popular resource type corresponding to each piece of expression package data according to a third historical usage record of that expression package data in the expression package database; wherein the popular resource type is the resource type of the multimedia resources on which the corresponding expression package data has been used for comments the most times, and the third historical usage record includes the number of times the expression package data has been used for comments;
and when the resource type corresponding to the expression package data in the correspondence list is inconsistent with the popular resource type, replacing that resource type in the correspondence list with the popular resource type.
7. The method of claim 3, further comprising:
extracting the expression package label of each piece of expression package data in the expression package database to obtain an expression package label set;
and performing semantic clustering on the expression package label set to determine expression package categories and the correspondence between different expression package labels and those categories, thereby obtaining the expression package category of each piece of expression package data in the expression package database.
8. An apparatus for commenting a multimedia resource, the apparatus comprising:
an obtaining module, configured to obtain, in response to a comment triggering operation performed by a user account on a target multimedia resource, target expression package data corresponding to the target multimedia resource;
and a display module, configured to display the target expression package data, so that the user account can select target expression data from the target expression package data to comment on the target multimedia resource.
9. An electronic device, characterized in that it comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method for commenting on multimedia resources according to any one of claims 1 to 7.
10. A storage medium, characterized in that it has stored thereon a computer program which, when executed by a processor, carries out the steps of the method for commenting on multimedia resources according to any one of claims 1 to 7.
CN202010044367.9A 2020-01-15 2020-01-15 Multimedia resource commenting method and device, electronic equipment and storage medium Pending CN111258435A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202010044367.9A CN111258435A (en) 2020-01-15 2020-01-15 Multimedia resource commenting method and device, electronic equipment and storage medium
US17/123,507 US11394675B2 (en) 2020-01-15 2020-12-16 Method and device for commenting on multimedia resource
EP21151561.4A EP3852044A1 (en) 2020-01-15 2021-01-14 Method and device for commenting on multimedia resource

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010044367.9A CN111258435A (en) 2020-01-15 2020-01-15 Multimedia resource commenting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111258435A (en) 2020-06-09

Family

ID=70948915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010044367.9A Pending CN111258435A (en) 2020-01-15 2020-01-15 Multimedia resource commenting method and device, electronic equipment and storage medium

Country Status (3)

Country Link
US (1) US11394675B2 (en)
EP (1) EP3852044A1 (en)
CN (1) CN111258435A (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109873756B (en) * 2019-03-08 2020-04-03 百度在线网络技术(北京)有限公司 Method and apparatus for transmitting information
CN115292600A (en) * 2022-08-15 2022-11-04 北京字跳网络技术有限公司 Information display method, device, equipment and medium

Citations (6)

Publication number Priority date Publication date Assignee Title
US20140279418A1 (en) * 2013-03-15 2014-09-18 Facebook, Inc. Associating an indication of user emotional reaction with content items presented by a social networking system
CN104933113A (en) * 2014-06-06 2015-09-23 北京搜狗科技发展有限公司 Expression input method and device based on semantic understanding
CN106484139A (en) * 2016-10-19 2017-03-08 北京新美互通科技有限公司 Emoticon recommends method and device
CN107729917A (en) * 2017-09-14 2018-02-23 北京奇艺世纪科技有限公司 The sorting technique and device of a kind of title
CN108073671A (en) * 2017-04-12 2018-05-25 北京市商汤科技开发有限公司 Business object recommends method, apparatus and electronic equipment
CN110519617A (en) * 2019-07-18 2019-11-29 平安科技(深圳)有限公司 Video comments processing method, device, computer equipment and storage medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
US9043196B1 (en) 2014-07-07 2015-05-26 Machine Zone, Inc. Systems and methods for identifying and suggesting emoticons
US20160306438A1 (en) * 2015-04-14 2016-10-20 Logitech Europe S.A. Physical and virtual input device integration
US11025565B2 (en) * 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
CA3009758A1 (en) 2015-12-29 2017-07-06 Mz Ip Holdings, Llc Systems and methods for suggesting emoji
WO2018128214A1 (en) 2017-01-05 2018-07-12 Platfarm Inc. Machine learning based artificial intelligence emoticon service providing method
EP3625967A1 (en) * 2017-12-14 2020-03-25 Rovi Guides, Inc. Systems and methods for aggregating related media content based on tagged content
US20200073485A1 (en) * 2018-09-05 2020-03-05 Twitter, Inc. Emoji prediction and visual sentiment analysis
US11140099B2 (en) * 2019-05-21 2021-10-05 Apple Inc. Providing message response suggestions
US11416539B2 (en) * 2019-06-10 2022-08-16 International Business Machines Corporation Media selection based on content topic and sentiment

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN111783013A (en) * 2020-06-28 2020-10-16 百度在线网络技术(北京)有限公司 Comment information publishing method, device, equipment and computer-readable storage medium
CN111782968A (en) * 2020-07-02 2020-10-16 北京字节跳动网络技术有限公司 Content recommendation method and device, readable medium and electronic equipment
CN111782968B (en) * 2020-07-02 2022-02-18 北京字节跳动网络技术有限公司 Content recommendation method and device, readable medium and electronic equipment
CN112541120A (en) * 2020-12-21 2021-03-23 北京百度网讯科技有限公司 Recommendation comment generation method, device, equipment, medium and computer program product
CN112541120B (en) * 2020-12-21 2023-06-27 北京百度网讯科技有限公司 Recommendation comment generation method, device, equipment and medium
CN113127628A (en) * 2021-04-23 2021-07-16 北京达佳互联信息技术有限公司 Method, device, equipment and computer-readable storage medium for generating comments
CN113127628B (en) * 2021-04-23 2024-03-19 北京达佳互联信息技术有限公司 Method, apparatus, device and computer readable storage medium for generating comments
CN113342221A (en) * 2021-05-13 2021-09-03 北京字节跳动网络技术有限公司 Comment information guiding method and device, storage medium and electronic equipment
CN113377975A (en) * 2021-06-18 2021-09-10 北京字节跳动网络技术有限公司 Multimedia resource processing method and device, computer equipment and storage medium
CN113821574A (en) * 2021-08-31 2021-12-21 北京达佳互联信息技术有限公司 User behavior classification method and device and storage medium
CN114338587A (en) * 2021-12-24 2022-04-12 北京达佳互联信息技术有限公司 Multimedia data processing method and device, electronic equipment and storage medium
CN114338587B (en) * 2021-12-24 2024-03-12 北京达佳互联信息技术有限公司 Multimedia data processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
US20210218696A1 (en) 2021-07-15
EP3852044A1 (en) 2021-07-21
US11394675B2 (en) 2022-07-19

Similar Documents

Publication Publication Date Title
CN111258435A (en) Multimedia resource commenting method and device, electronic equipment and storage medium
CN111638832A (en) Information display method, device, system, electronic equipment and storage medium
KR20160054392A (en) Electronic apparatus and operation method of the same
CN108227950B (en) Input method and device
CN109819288B (en) Method and device for determining advertisement delivery video, electronic equipment and storage medium
CN111556366A (en) Multimedia resource display method, device, terminal, server and system
CN109257645A (en) Video cover generation method and device
CN110147467A (en) A kind of generation method, device, mobile terminal and the storage medium of text description
CN110175223A (en) A kind of method and device that problem of implementation generates
CN111309940A (en) Information display method, system, device, electronic equipment and storage medium
CN112672208A (en) Video playing method, device, electronic equipment, server and system
CN112464031A (en) Interaction method, interaction device, electronic equipment and storage medium
CN112069951A (en) Video clip extraction method, video clip extraction device, and storage medium
CN110110204A (en) A kind of information recommendation method, device and the device for information recommendation
CN113157972B (en) Recommendation method and device for video cover document, electronic equipment and storage medium
CN112328809A (en) Entity classification method, device and computer readable storage medium
CN112015277A (en) Information display method and device and electronic equipment
US11922725B2 (en) Method and device for generating emoticon, and storage medium
CN112000266B (en) Page display method and device, electronic equipment and storage medium
CN111355999B (en) Video playing method and device, terminal equipment and server
CN114302231A (en) Video processing method and device, electronic equipment and storage medium
CN110662103B (en) Multimedia object reconstruction method and device, electronic equipment and readable storage medium
CN110730382B (en) Video interaction method, device, terminal and storage medium
CN112000877A (en) Data processing method, device and medium
CN113259754A (en) Video generation method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination