CN111353071A - Label generation method and device

Info

Publication number
CN111353071A
CN111353071A (application CN201811481612.1A)
Authority
CN
China
Prior art keywords
label
video
preset word
input
entry
Prior art date
Legal status
Pending
Application number
CN201811481612.1A
Other languages
Chinese (zh)
Inventor
王智
杨莹
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201811481612.1A
Publication of CN111353071A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Library & Information Science (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The disclosure relates to a tag generation method and apparatus. The method is applied to a server and includes: acquiring an entry tag from a terminal, where the entry tag represents a tag created for a video by a user; determining whether the entry tag satisfies a quality condition; and, when the entry tag satisfies the quality condition, setting the entry tag as a tag of the video. By evaluating the quality of the entry tag that the user creates for the video, the tag generation method and apparatus according to embodiments of the disclosure can expand the distribution range of the video.

Description

Label generation method and device
Technical Field
The present disclosure relates to the field of multimedia technologies, and in particular, to a tag generation method and apparatus.
Background
With the development of network multimedia technology, the amount of video content keeps growing. A video tag is a phrase that describes the characteristics of a video, so that viewers can search for the video more accurately.
In the related art, a user can create a tag for a video when uploading it. However, such a tag depends entirely on the user's own understanding of the video, its quality is not guaranteed, and the distribution effect of the video may suffer.
Disclosure of Invention
In view of this, the present disclosure provides a tag generation method and apparatus, which can expand the distribution range of videos.
According to a first aspect of the present disclosure, a tag generation method applied to a server is provided. The method includes: acquiring an entry tag from a terminal, where the entry tag represents a tag created for a video by a user; determining whether the entry tag satisfies a quality condition; and, when the entry tag satisfies the quality condition, setting the entry tag as a tag of the video.
According to a second aspect of the present disclosure, a tag generation method applied to a terminal is provided. The method includes: acquiring an entry tag, where the entry tag represents a tag created for a video by a user; and displaying first prompt information for the entry tag, where the first prompt information includes a prompt that the entry tag does not satisfy a quality condition and a first preset word recommended as a tag of the video.
According to a third aspect of the present disclosure, a tag generation apparatus applied to a server is provided. The apparatus includes: a first acquisition module configured to acquire an entry tag from a terminal, where the entry tag represents a tag created for a video by a user; a determination module configured to determine whether the entry tag satisfies a quality condition; and a first setting module configured to set the entry tag as a tag of the video when the entry tag satisfies the quality condition.
According to a fourth aspect of the present disclosure, a tag generation apparatus applied to a terminal is provided. The apparatus includes: an acquisition module configured to acquire an entry tag, where the entry tag represents a tag created for a video by a user; and a display module configured to display first prompt information for the entry tag, where the first prompt information includes a prompt that the entry tag does not satisfy a quality condition and a first preset word recommended as a tag of the video.
In the embodiments of the disclosure, quality evaluation is performed on the entry tag that the user creates for the video, and the entry tag is set as a tag of the video only when its quality meets the requirement. This guarantees the quality of the entry tag, makes the video easier to find through search, and thereby expands the distribution range of the video.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a tag generation method according to an embodiment of the present disclosure.
Fig. 2 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure.
Fig. 3 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure.
Fig. 4 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure.
Fig. 5 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure.
Fig. 6 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure.
Fig. 7a shows a schematic diagram of an example of a video upload page according to an embodiment of the present disclosure.
Fig. 7b shows a schematic diagram of an example of a video upload page according to an embodiment of the present disclosure.
Fig. 7c shows a schematic diagram of an example of a video upload page according to an embodiment of the present disclosure.
Fig. 8 shows a block diagram of a tag generation apparatus according to an embodiment of the present disclosure.
Fig. 9 shows a block diagram of a tag generation apparatus according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 shows a flow chart of a tag generation method according to an embodiment of the present disclosure. The method may be applied to a server. As shown in fig. 1, the method may include:
step S11, obtaining an entry label from the terminal, where the entry label represents a label created by the user for the video.
Step S12, determining whether the entry label satisfies a quality condition.
And step S13, when the entry label meets the quality condition, setting the entry label as the label of the video.
In the embodiments of the disclosure, quality evaluation is performed on the entry tag that the user creates for the video, and the entry tag is set as a tag of the video only when its quality meets the requirement. This guarantees the quality of the entry tag, makes the video easier to find through search, and thereby expands the distribution range of the video.
In step S11, the entry tag may represent a tag created by the user for the video. In one possible implementation, when a user uploads a video at a terminal, the user can create an entry tag for the video; after acquiring the entry tag, the terminal can send it to the server.
In step S12, the quality condition may be used to evaluate the quality of the entered label. In one possible implementation, the quality condition may be that the video coverage of the entry tag is greater than a first threshold, and/or the search heat of the entry tag is greater than a second threshold.
The first threshold and the second threshold may be set as needed, and the disclosure is not limited thereto.
The video coverage of a tag may be determined from the ratio of the number of videos retrieved when searching with the tag to the total number of videos in the search range. For the entry tag, its video coverage may be the ratio of the number of videos retrieved with the entry tag to the total number of videos in the search range. The search range may be set as required; it may be, for example, one or more video websites, or one or more categories within a video website, which is not limited in this disclosure. Taking movie videos on a certain video website as the search range, the video coverage of the entry tag may be the ratio of the number of movies retrieved with the entry tag on that website to the total number of movie videos on that website.
The search popularity of a tag may be determined according to the number of times the tag is searched within a specified time period. For the entry tag, the search heat of the entry tag may be determined according to the number of times the entry tag is searched within a specified time period. The specified time period may be set as required, for example, may be the last week or the last month, and the like, and the disclosure is not limited thereto.
In one example, the search heat of the entry tag is positively correlated with the number of times the entry tag is searched within the specified time period: the more often the entry tag is searched within the specified time period, the higher its search heat; the less often it is searched, the lower its search heat.
In yet another example, the search heat of the entry tag may be determined from both the number of times the entry tag is searched within the specified time period and a weight assigned to the entry tag. The weight can represent how popular the entry tag currently is: the larger the weight, the more popular the entry tag. The weight can be set as needed; for example, during a shopping discount season, a larger weight can be given to tags related to product recommendations, and when a new movie is released, a larger weight can be given to tags associated with clips from that movie.
In one possible implementation, step S12 may include: and if the video coverage rate of the input label is greater than a first threshold value and/or the search heat degree of the input label is greater than a second threshold value, determining that the input label meets the quality condition.
When the video coverage of the entry tag is greater than the first threshold, searching with the entry tag within the specified range retrieves a relatively large number of videos, which indicates that the search coverage of the entry tag is wide. Therefore, when the video coverage of the entry tag is greater than the first threshold, the server may determine that the quality of the entry tag is high and that the quality condition is satisfied.
When the search heat of the entry tag is greater than the second threshold, the entry tag is searched relatively often within the specified time period, which indicates that the entry tag is popular. Therefore, when the search heat of the entry tag is greater than the second threshold, the server may determine that the quality of the entry tag is high and that the quality condition is satisfied.
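To make this concrete, the following Python sketch shows one possible way a server could compute video coverage and search heat and test them against the two thresholds. It is only an illustration of the idea described above; the data model (each video represented by its set of tags), the helper names, and the threshold values are assumptions, not details given by the disclosure.

```python
# Illustrative sketch only: data structures, names and thresholds are assumptions.

def video_coverage(tag: str, videos: list[set[str]]) -> float:
    """Ratio of videos in the search range that a search for `tag` retrieves."""
    if not videos:
        return 0.0
    hits = sum(1 for video_tags in videos if tag in video_tags)
    return hits / len(videos)

def search_heat(search_count: int, weight: float = 1.0) -> float:
    """Heat grows with the number of searches in the specified time period,
    optionally scaled by a popularity weight (e.g. boosted during a shopping
    discount season or when a related movie is newly released)."""
    return search_count * weight

def satisfies_quality(tag: str, videos: list[set[str]], search_count: int,
                      weight: float = 1.0,
                      coverage_threshold: float = 0.01,
                      heat_threshold: float = 100.0) -> bool:
    """Quality condition: coverage above the first threshold and/or heat
    above the second threshold."""
    return (video_coverage(tag, videos) > coverage_threshold
            or search_heat(search_count, weight) > heat_threshold)
```

Here "and/or" is read as a logical OR; a stricter implementation could require both conditions to hold.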
In step S13, when the entry tag satisfies the quality condition, the server may set the entry tag as a tag of the video.
Because an entry tag created by the user for the video usually matches the video content well, setting the entry tag as a tag of the video when it satisfies the quality condition ensures both the accuracy and the quality of the tag, which helps expand the distribution range of the video.
Fig. 2 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure. As shown in fig. 2, the method may further include:
step S14, if the input label does not satisfy the quality condition, acquiring a first preset word matched with the input label from a preset word bank, wherein the preset word bank represents a word bank formed by preset words satisfying the quality condition.
In one possible implementation, if the video coverage of the entry tag is less than or equal to the first threshold and the search heat of the entry tag is less than or equal to the second threshold, it is determined that the entry tag does not satisfy the quality condition.
When the video coverage of the entry tag is less than or equal to the first threshold, the search coverage of the entry tag is narrow; when the search heat of the entry tag is less than or equal to the second threshold, the entry tag is not very popular. When the search coverage of the entry tag is narrow and the entry tag is not popular, the entry tag is unlikely to be searched, so setting it as a tag of the video would not help the distribution of the video. Therefore, when the entry tag does not satisfy the quality condition, instead of using the entry tag as a tag of the video, the server needs to acquire other words to serve as tags of the video.
The preset word bank is a lexicon composed of preset words that satisfy the quality condition. In one possible implementation, the server may add words whose video coverage is greater than the first threshold, as well as words whose search heat is greater than the second threshold, to the preset word bank as preset words.
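The preset word bank itself could be maintained with the same two thresholds. The sketch below assumes per-word coverage values (computed, for example, as in the previous sketch) and per-word search counts; all names and thresholds are hypothetical.

```python
def build_preset_word_bank(coverage: dict[str, float],
                           search_counts: dict[str, int],
                           coverage_threshold: float = 0.01,
                           heat_threshold: float = 100.0) -> set[str]:
    """Collect candidate words whose video coverage or search heat clears
    the corresponding threshold; such words become preset words."""
    word_bank = set()
    for word in set(coverage) | set(search_counts):
        if (coverage.get(word, 0.0) > coverage_threshold
                or search_counts.get(word, 0) > heat_threshold):
            word_bank.add(word)
    return word_bank
```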
The first preset word represents a preset word in the preset word bank that matches the entry tag. In one possible implementation, the first preset word may be a near-synonym or a synonym of the entry tag. In this case, acquiring the first preset word matching the entry tag from the preset word bank in step S14 may include: determining a near-synonym or a synonym of the entry tag in the preset word bank as the first preset word.
In one possible implementation, acquiring the first preset word matching the entry tag from the preset word bank in step S14 may include: splitting the entry tag into a plurality of sub-tags; and acquiring, from the preset word bank, a first preset word matching each sub-tag.
When the entry tag contains many words, the server may split it into a plurality of sub-tags and acquire a first preset word matching each sub-tag from the preset word bank, for example, a synonym or near-synonym of each sub-tag. In one example, the entry tag may be segmented with a word-segmentation algorithm, and each segment is used as a sub-tag.
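A possible sketch of this splitting-and-matching step follows; `segment` stands for any word-segmentation algorithm and `synonyms` for a synonym/near-synonym dictionary, neither of which is specified by the disclosure.

```python
from typing import Callable, Iterable

# Hypothetical matching of an entry tag against the preset word bank:
# split the tag into sub-tags, then look for each sub-tag or one of its
# synonyms / near-synonyms inside the word bank.
def match_preset_words(entry_tag: str,
                       word_bank: set[str],
                       synonyms: dict[str, set[str]],
                       segment: Callable[[str], Iterable[str]]) -> list[str]:
    matches: list[str] = []
    for sub_tag in segment(entry_tag):          # word segmentation
        if sub_tag in word_bank:                # direct hit
            matches.append(sub_tag)
            continue
        for candidate in synonyms.get(sub_tag, set()):
            if candidate in word_bank:          # synonym / near-synonym hit
                matches.append(candidate)
                break
    return matches
```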
And step S15, setting the first preset word as the label of the video.
Because the first preset word matches the entry tag, it also matches the content of the video well, and at the same time it satisfies the quality condition. Setting the first preset word as a tag of the video therefore ensures both the accuracy and the quality of the tag, which helps expand the distribution range of the video.
In one possible implementation, when the entry tag contains misspelled characters, the server may first correct them and then determine whether the corrected entry tag satisfies the quality condition. If the corrected entry tag satisfies the quality condition, the corrected entry tag is set as a tag of the video; if it does not, a first preset word matching the corrected entry tag is acquired from the preset word bank and set as a tag of the video.
Fig. 3 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure. As shown in fig. 3, the setting of the first preset word as the label of the video in step S15 may include:
step S151, sending a first prompt message to the terminal, wherein the first prompt message comprises the first preset word.
Step S152, when receiving a confirmation message sent by the terminal in response to the first prompt message, setting the first preset word as the label of the video.
The first prompt message may be used to prompt the user that the entry tag does not satisfy the quality condition and to recommend an available tag to the user. In one possible implementation, the first prompt message may carry the first preset word.
When the terminal receives the first prompt message, it can display first prompt information, which may include a prompt that the entry tag does not satisfy the quality condition and the first preset word recommended as a tag of the video. The user may choose whether to adopt the first preset word as a tag of the video. When the terminal detects an indication that the user has confirmed the first preset word as a tag of the video, it may send a confirmation message to the server in response to the first prompt message. On receiving this confirmation message, the server can set the first preset word as a tag of the video.
Therefore, by sending the first preset word to the terminal, the server gives the user a reference for revising the tag when the quality of the entry tag is low, which helps improve the quality of the video's tags and expand the distribution range of the video.
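Tying steps S11 to S15 and S151 to S152 together, a server-side handler could look roughly like the following. The callbacks `tag_meets_quality` and `recommend_preset_words` could be the `satisfies_quality()` and `match_preset_words()` functions from the earlier sketches; the message format and the in-memory persistence layer are assumptions, not part of the disclosure.

```python
from typing import Callable

VIDEO_TAGS: dict[str, list[str]] = {}            # placeholder persistence layer

def set_video_tag(video_id: str, tag: str) -> None:
    VIDEO_TAGS.setdefault(video_id, []).append(tag)

def handle_entry_tag(entry_tag: str,
                     video_id: str,
                     tag_meets_quality: Callable[[str], bool],
                     recommend_preset_words: Callable[[str], list[str]],
                     send_to_terminal: Callable[[dict], None],
                     wait_for_confirmation: Callable[[], bool]) -> None:
    """Rough server-side flow of steps S11-S15 and S151-S152."""
    if tag_meets_quality(entry_tag):                     # step S12
        set_video_tag(video_id, entry_tag)               # step S13
        return
    # Entry tag failed the quality condition: look up first preset words (S14).
    candidates = recommend_preset_words(entry_tag)
    send_to_terminal({"type": "first_prompt", "candidates": candidates})   # S151
    if wait_for_confirmation():          # confirmation message from the terminal
        for word in candidates:
            set_video_tag(video_id, word)                # step S152
```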
Fig. 4 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure. As shown in fig. 4, the step S12 of determining whether the entry label satisfies the quality condition may include:
step S121, when the security audit of the input label passes, determining whether the input label meets the quality condition.
In embodiments of the present disclosure, the server may audit the security of the entry tag before evaluating its quality. For example, the server may determine whether the entry tag contains sensitive or high-risk vocabulary; when it does, the server may determine that the entry tag fails the security audit.
When the entry tag passes the security audit, the server may go on to determine whether it satisfies the quality condition.
In one possible implementation, when the entry tag fails the security audit, a second prompt message is sent to the terminal.
The second prompt message may be used to indicate that the entry tag failed the security audit. In one example, the second prompt message may include the word in the entry tag that caused the failure and the reason for the failure. When the terminal receives the second prompt message, it displays second prompt information indicating that the entry tag failed the security audit. When displaying the second prompt information, the terminal can mark the offending word, for example in bold or in red, and display the reason why the security audit failed.
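The security audit could be as simple as a sensitive-word scan. The sketch below is one possible reading, with a placeholder word list and an assumed message format.

```python
# Hypothetical sensitive/high-risk word audit and second prompt message.
SENSITIVE_WORDS: set[str] = {"example_banned_word"}   # placeholder list

def security_audit(entry_tag: str) -> tuple[bool, list[str]]:
    """Return (passed, offending_words)."""
    offending = [word for word in SENSITIVE_WORDS if word in entry_tag]
    return (not offending, offending)

def build_second_prompt(entry_tag: str, offending: list[str]) -> dict:
    # Carries the offending words and the reason so that the terminal can
    # highlight them (e.g. in bold or red) and show why the audit failed.
    return {"type": "second_prompt",
            "tag": entry_tag,
            "offending_words": offending,
            "reason": "contains sensitive or high-risk vocabulary"}
```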
Fig. 5 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure. As shown in fig. 5, the method may further include:
and step S16, receiving the video sent by the terminal.
Step S17, analyzing the content of the video through AI, and acquiring a keyword describing the content of the video.
And step S18, acquiring a second preset word matched with the keyword from a preset word bank, wherein the preset word bank represents a word bank formed by preset words meeting quality conditions.
Step S19, sending a third prompt message to the terminal, where the third prompt message includes the second preset word.
Step S20, when receiving a confirmation message sent by the terminal in response to the third prompt message, setting the second preset word as the label of the video.
AI (Artificial Intelligence) can be used to analyze the content of a video. In the embodiments of the disclosure, the server may analyze the content of the video through AI and obtain keywords describing that content. In one example, the server may analyze the images, text, speech, objects, behaviors, scenes, and so on in the video to obtain keywords for the video content. For example, if the server determines through AI that the video contains actor A and actor B, that the scene contains a towel and a water bottle, and that the behaviors include a person falling, it can conclude that the video is related to a street dance program; the name of actor A, the name of actor B, the name of the street dance program, or words related to street dance can then be used as keywords for the video content.
The second preset word represents a preset word in the preset word bank that matches a keyword of the video content; for example, it may be a synonym or near-synonym, within the preset word bank, of such a keyword. Because the second preset word comes from the preset word bank, its quality is high, and because it matches a keyword of the video content, it also matches the content of the video well. Therefore, setting the second preset word as a tag of the video ensures both the accuracy and the quality of the tag and expands the distribution range of the video.
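One possible shape of this keyword-to-tag recommendation is sketched below; `extract_keywords` stands in for whatever AI analysis the server performs on images, text, speech, objects, behaviors, and scenes, and every other name is an assumption.

```python
from typing import Callable, Iterable

# Hypothetical recommendation of second preset words from AI-extracted
# keywords, reusing a preset word bank and a synonym dictionary as in the
# earlier sketches.
def recommend_from_content(video_path: str,
                           word_bank: set[str],
                           synonyms: dict[str, set[str]],
                           extract_keywords: Callable[[str], Iterable[str]]) -> list[str]:
    recommended: list[str] = []
    for keyword in extract_keywords(video_path):   # e.g. actor names, "street dance"
        if keyword in word_bank:
            recommended.append(keyword)
        else:
            recommended.extend(c for c in synonyms.get(keyword, set())
                               if c in word_bank)
    return recommended    # carried to the terminal in the third prompt message
```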
The third prompting message may be used to recommend an available tag to the user according to the content of the video, and the third prompting message may include a second preset word.
When the terminal receives the third prompt message, the terminal may display third prompt information, where the third prompt information may include a second preset word recommended as a tag of the video. The user may select whether to adopt the second preset word as a label for the video. When the terminal detects that the user confirms the indication message for selecting the second preset word as the label of the video, the terminal may send a confirmation message sent in response to the third prompt message to the server. When the server receives a confirmation message sent by the terminal in response to the third prompt message, the server may set the second preset word as the label of the video.
Therefore, by sending the second preset word to the terminal, the server improves how well the recommended tags match the video content and gives the user a reference for creating tags, which helps improve the quality of the video's tags and expand the distribution range of the video.
Fig. 6 shows a flow diagram of a tag generation method according to an embodiment of the present disclosure. The method can be applied to a terminal. As shown in fig. 6, the method may include:
step S31, an entry label is obtained, which represents a label created by the user for the video.
Step S32, displaying first prompt information aiming at the label, wherein the first prompt information comprises a prompt that the input label does not meet the quality condition, and a first preset word recommended to be used as the label of the video.
For the entry tag, refer to step S11; for the first prompt information, refer to step S151; details are not repeated here.
In the embodiments of the disclosure, displaying the first prompt information gives the user a reference for revising the tag when the quality of the entry tag is low, which helps improve the quality of the video's tags and thereby expands the distribution range of the video.
In one possible implementation, if the video coverage of the entry tag is less than or equal to a first threshold and/or the search heat of the entry tag is less than or equal to a second threshold, the entry tag does not satisfy the quality condition,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
In one possible implementation, the first preset word is a near-synonym or a synonym of the entry tag.
In one possible implementation, the terminal may display second prompt information indicating that the entry tag failed the security audit.
For the second prompt information, refer to step S121; details are not repeated here.
In the embodiments of the disclosure, displaying the second prompt information alerts the user to the security problem with the entry tag, which helps the user revise the tag.
In one possible implementation manner, the terminal may display third prompt information, where the third prompt information includes a second preset word recommended as a tag of the video, and the second preset word is determined by analyzing the content of the video through AI.
For the third prompt information, refer to step S19; details are not repeated here.
In the embodiments of the disclosure, displaying the third prompt information allows tags to be recommended to the user based on the video content, which improves how well the recommended tags match that content, gives the user a reference for creating tags, helps improve the quality of the video's tags, and expands the distribution range of the video.
In one possible implementation, when the terminal receives the first prompt message, it may mark the entry tag as an unavailable tag and display the first prompt information.
In one possible implementation, when the entry label satisfies the quality condition, the server may send a message indicating that the entry label satisfies the quality condition to the terminal. The terminal can mark the input label as an available label when receiving the message indicating that the input label meets the quality condition.
In one possible implementation, the terminal may distinguish between available and unavailable tags in different colors or fonts. For example, the terminal may display the available tags in green, the unavailable tags in red, and so on.
Application example
Fig. 7a, 7b, and 7c respectively show schematic diagrams of examples of video upload pages according to an embodiment of the present disclosure. Fig. 7a shows a schematic diagram of an uploading page in the process of uploading a video. Fig. 7b and 7c show schematic diagrams of upload pages when the video upload is completed.
As shown in fig. 7a, the video upload page may include a video upload area and a video information editing area. Wherein the video upload area may be used for uploading video. The video information editing area may be used to edit a relevant area of the video.
As shown in fig. 7a, the video information editing area may include a title editing area, a category editing area, a tag editing area, a profile editing area, an original creation declaration area, and the like.
As shown in fig. 7b, the user may enter an entry tag created for the video in the tag editing area. The terminal can send the acquired entry tag to the server, and the server may determine whether the entry tag satisfies the quality condition. When the entry tag does not satisfy the quality condition, the server can send the terminal a message (namely, a first prompt message) indicating that the entry tag does not satisfy the quality condition. As shown in fig. 7b, when the terminal receives the first prompt message, it may display, next to the entry tag that does not satisfy the quality condition, a prompt to that effect together with a first preset word recommended as a tag of the video.
As shown in fig. 7c, the terminal may display a tag (i.e., a second preset word) recommended by the AI analysis of the content of the video near (e.g., below) the tag editing area.
In one example, as shown in fig. 7a, during the video uploading process, the video uploading area may display information such as the size, required time, and uploading progress of the video. As shown in fig. 7b and 7c, when the video upload is completed, the video upload area may display information such as the size of the video and the image quality of the video.
In one example, as shown in fig. 7a, during the video upload, the video upload area may provide a control for canceling the video upload. As shown in fig. 7b, the video upload area may provide a control for re-uploading the video when the video upload is complete.
In one example, the video upload page may also include a video cover area. As shown in fig. 7a, the video cover area may include a control for setting the cover; the user can set the cover of the video by triggering this control. As shown in fig. 7b, once the cover is set, the video cover area may display a preview of the video's cover.
In one example, as shown in fig. 7a, the video upload page may further include a navigation area that may be used to expose navigation of various functional modules of the video upload system.
In one example, as shown in fig. 7a, the video upload page may also include a video publishing control, a save draft control, and the like.
In one example, the video upload page may also include an ancillary functions portal (not shown) for complaints, customer service, and the like.
Fig. 8 shows a block diagram of a tag generation apparatus according to an embodiment of the present disclosure. The apparatus may be applied to a server. As shown in fig. 8, the apparatus 80 may include:
a first obtaining module 81, configured to obtain an entry tag from a terminal, where the entry tag represents a tag created for a video by a user;
a determination module 82 for determining whether the entry label meets a quality condition;
and the first setting module 83 is configured to set the entry label as the label of the video when the entry label meets the quality condition.
In the embodiments of the disclosure, quality evaluation is performed on the entry tag that the user creates for the video, and the entry tag is set as a tag of the video only when its quality meets the requirement. This guarantees the quality of the entry tag, makes the video easier to find through search, and thereby expands the distribution range of the video.
In a possible implementation manner, the determining module is specifically configured to:
if the video coverage rate of the input label is larger than a first threshold value and/or the search heat degree of the input label is larger than a second threshold value, determining that the input label meets the quality condition,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring a first preset word matched with the input label from a preset word bank if the input label does not meet the quality condition, wherein the preset word bank represents a word bank formed by preset words meeting the quality condition;
and the second setting module is used for setting the first preset word as the label of the video.
In one possible implementation, the second acquisition module is specifically configured to:
determine a near-synonym or a synonym of the entry tag in the preset word bank as the first preset word.
In a possible implementation manner, the second obtaining module is further configured to:
splitting the input label into a plurality of sub labels;
and acquiring a first preset word matched with each sub-label from a preset word bank.
In a possible implementation manner, the second setting module is specifically configured to:
sending a first prompt message to the terminal, wherein the first prompt message comprises the first preset word;
and when receiving a confirmation message sent by the terminal in response to the first prompt message, setting the first preset word as the label of the video.
In a possible implementation manner, the determining module is specifically configured to:
and when the safety audit of the input label passes, determining whether the input label meets the quality condition.
In one possible implementation, the apparatus further includes:
and the first sending module is used for sending a second prompt message to the terminal when the security audit of the input label is not passed.
In one possible implementation, the apparatus further includes:
the receiving module is used for receiving the video sent by the terminal;
the third acquisition module is used for analyzing the content of the video through AI and acquiring keywords describing the content of the video;
the fourth acquisition module is used for acquiring a second preset word matched with the keyword from a preset word bank, wherein the preset word bank represents a word bank formed by preset words meeting quality conditions;
the second sending module is used for sending a third prompt message to the terminal, wherein the third prompt message comprises the second preset word;
and the third setting module is used for setting the second preset word as the label of the video when receiving a confirmation message sent by the terminal in response to the third prompt message.
Fig. 9 shows a block diagram of a tag generation apparatus according to an embodiment of the present disclosure. The apparatus can be applied to a terminal. As shown in fig. 9, the apparatus 90 may include:
an obtaining module 91, configured to obtain an entry tag, where the entry tag represents a tag created for a video by a user;
a first display module 92, configured to display first prompt information for the tag, where the first prompt information includes a prompt that the input tag does not satisfy the quality condition, and a first preset word recommended as the tag of the video.
In the embodiments of the disclosure, displaying the first prompt information gives the user a reference for revising the tag when the quality of the entry tag is low, which helps improve the quality of the video's tags and thereby expands the distribution range of the video.
In one possible implementation, if the video coverage of the entry tag is less than or equal to a first threshold and/or the search heat of the entry tag is less than or equal to a second threshold, the entry tag does not satisfy the quality condition,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
In one possible implementation, the first preset word is a near-synonym or a synonym of the entry tag.
In one possible implementation, the apparatus further includes:
and the second display module is used for displaying second prompt information, and the second prompt information indicates that the security audit of the input label is not passed.
In one possible implementation, the apparatus further includes:
and the third display module is used for displaying third prompt information, wherein the third prompt information comprises a second preset word recommended to be used as a label of the video, and the second preset word is determined by analyzing the content of the video through AI.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include forms of volatile memory in a computer readable medium, Random Access Memory (RAM) and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (28)

1. A label generation method is applied to a server and comprises the following steps:
acquiring an entry label from a terminal, wherein the entry label represents a label created for a video by a user;
determining whether the input label meets a quality condition;
and when the input label meets the quality condition, setting the input label as the label of the video.
2. The method of claim 1, wherein determining whether the entry label satisfies a quality condition comprises:
if the video coverage rate of the input label is larger than a first threshold value and/or the search heat degree of the input label is larger than a second threshold value, determining that the input label meets the quality condition,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
3. The method of claim 1, further comprising:
if the input label does not meet the quality condition, acquiring a first preset word matched with the input label from a preset word bank, wherein the preset word bank represents a word bank formed by preset words meeting the quality condition;
and setting the first preset word as a label of the video.
4. The method of claim 3, wherein obtaining a first predetermined word matching the entry tag from a predetermined lexicon comprises:
determining a near-synonym or a synonym of the input label in the preset word bank as the first preset word.
5. The method of claim 3, wherein obtaining a first predetermined word matching the entry tag from a predetermined lexicon comprises:
splitting the input label into a plurality of sub labels;
and acquiring a first preset word matched with each sub-label from a preset word bank.
6. The method of claim 3, wherein setting the first preset word as a label of the video comprises:
sending a first prompt message to the terminal, wherein the first prompt message comprises the first preset word;
and when receiving a confirmation message sent by the terminal in response to the first prompt message, setting the first preset word as the label of the video.
7. The method of claim 1, wherein determining whether the entry label satisfies a quality condition comprises:
and when the safety audit of the input label passes, determining whether the input label meets the quality condition.
8. The method of claim 7, further comprising:
and when the security audit of the input label is not passed, sending a second prompt message to the terminal.
9. The method of claim 1, further comprising:
receiving the video sent by the terminal;
analyzing the content of the video through AI to obtain keywords describing the content of the video;
acquiring a second preset word matched with the keyword from a preset word bank, wherein the preset word bank represents a word bank formed by preset words meeting quality conditions;
sending a third prompt message to the terminal, wherein the third prompt message comprises the second preset word;
and when receiving a confirmation message sent by the terminal in response to the third prompt message, setting the second preset word as the label of the video.
10. A label generation method is applied to a terminal and comprises the following steps:
acquiring an entry label, wherein the entry label represents a label created by a user for a video;
displaying first prompt information aiming at the label, wherein the first prompt information comprises a prompt that the input label does not meet the quality condition, and recommending a first preset word as the label of the video.
11. The tag generation method according to claim 10, wherein the entry tag does not satisfy a quality condition if the video coverage of the entry tag is less than or equal to a first threshold and/or the search heat of the entry tag is less than or equal to a second threshold,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
12. The tag generation method according to claim 10, wherein the first preset word is a near-synonym or a synonym of the entry tag.
13. The label generation method according to claim 10, further comprising: and displaying second prompt information, wherein the second prompt information indicates that the security audit of the input label is not passed.
14. The label generation method according to claim 10, further comprising: displaying third prompt information, wherein the third prompt information includes a second preset word recommended as a label of the video, and the second preset word is determined by analyzing the content of the video through AI.
15. A label generation apparatus, applied to a server, the apparatus comprising:
a first acquisition module, configured to acquire an entry label from a terminal, wherein the entry label represents a label created by a user for a video;
the determining module is used for determining whether the input label meets the quality condition;
and the first setting module is used for setting the input label as the label of the video when the input label meets the quality condition.
16. The apparatus of claim 15, wherein the determining module is specifically configured to:
if the video coverage rate of the input label is larger than a first threshold value and/or the search heat degree of the input label is larger than a second threshold value, determining that the input label meets the quality condition,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
17. The apparatus of claim 15, further comprising:
the second acquisition module is used for acquiring a first preset word matched with the input label from a preset word bank if the input label does not meet the quality condition, wherein the preset word bank represents a word bank formed by preset words meeting the quality condition;
and the second setting module is used for setting the first preset word as the label of the video.
18. The apparatus of claim 17, wherein the second obtaining module is specifically configured to:
determine a near-synonym or a synonym of the input label in the preset word bank as the first preset word.
19. The apparatus of claim 17, wherein the second obtaining module is further configured to:
splitting the input label into a plurality of sub labels;
and acquiring a first preset word matched with each sub-label from a preset word bank.
20. The apparatus of claim 17, wherein the second setting module is specifically configured to:
sending a first prompt message to the terminal, wherein the first prompt message comprises the first preset word;
and when receiving a confirmation message sent by the terminal in response to the first prompt message, setting the first preset word as the label of the video.
21. The apparatus of claim 15, wherein the determining module is specifically configured to:
and when the safety audit of the input label passes, determining whether the input label meets the quality condition.
22. The apparatus of claim 21, further comprising:
and the first sending module is used for sending a second prompt message to the terminal when the security audit of the input label is not passed.
23. The apparatus of claim 15, further comprising:
the receiving module is used for receiving the video sent by the terminal;
the third acquisition module is used for analyzing the content of the video through AI and acquiring keywords describing the content of the video;
the fourth acquisition module is used for acquiring a second preset word matched with the keyword from a preset word bank, wherein the preset word bank represents a word bank formed by preset words meeting quality conditions;
the second sending module is used for sending a third prompt message to the terminal, wherein the third prompt message comprises the second preset word;
and the third setting module is used for setting the second preset word as the label of the video when receiving a confirmation message sent by the terminal in response to the third prompt message.
24. A tag generation apparatus, applied to a terminal, the apparatus comprising:
the acquisition module is used for acquiring an input label, wherein the input label represents a label created by a user for the video;
the first display module is used for displaying first prompt information aiming at the label, wherein the first prompt information comprises a prompt that the input label does not meet the quality condition, and a first preset word recommended to be used as the label of the video.
25. The tag generation apparatus according to claim 24, wherein the entry tag does not satisfy a quality condition if the video coverage of the entry tag is less than or equal to a first threshold and/or the search heat of the entry tag is less than or equal to a second threshold,
the video coverage rate of the label is determined according to the ratio of the number of videos searched by the label to the total number of videos in the search range, and the search heat degree of the label is determined according to the number of times that the label is searched in a specified time period.
26. The tag generation apparatus according to claim 24, wherein the first preset word is a near-synonym or a synonym of the entry tag.
27. The label generating apparatus according to claim 24, further comprising:
and the second display module is used for displaying second prompt information, and the second prompt information indicates that the security audit of the input label is not passed.
28. The label generating apparatus according to claim 24, further comprising:
and the third display module is used for displaying third prompt information, wherein the third prompt information comprises a second preset word recommended to be used as a label of the video, and the second preset word is determined by analyzing the content of the video through AI.
CN201811481612.1A 2018-12-05 2018-12-05 Label generation method and device Pending CN111353071A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811481612.1A CN111353071A (en) 2018-12-05 2018-12-05 Label generation method and device


Publications (1)

Publication Number Publication Date
CN111353071A true CN111353071A (en) 2020-06-30

Family

ID=71193596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811481612.1A Pending CN111353071A (en) 2018-12-05 2018-12-05 Label generation method and device

Country Status (1)

Country Link
CN (1) CN111353071A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104133877A (en) * 2014-07-25 2014-11-05 百度在线网络技术(北京)有限公司 Software label generation method and device
CN105138670A (en) * 2015-09-06 2015-12-09 天翼爱音乐文化科技有限公司 Audio file label generation method and system
WO2017071370A1 (en) * 2015-10-30 2017-05-04 华为技术有限公司 Label processing method and device
CN105912682A (en) * 2016-04-14 2016-08-31 乐视控股(北京)有限公司 Video classification label generating method and device
CN108228665A (en) * 2016-12-22 2018-06-29 阿里巴巴集团控股有限公司 Determine object tag, the method and device for establishing tab indexes, object search
CN106920108A (en) * 2017-01-26 2017-07-04 武汉奇米网络科技有限公司 A kind of method and system of commodity typing
CN108829800A (en) * 2018-05-29 2018-11-16 努比亚技术有限公司 A kind of search data processing method, equipment and computer readable storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199526A (en) * 2020-09-30 2021-01-08 北京字节跳动网络技术有限公司 Method and device for issuing multimedia content, electronic equipment and storage medium
WO2022068543A1 (en) * 2020-09-30 2022-04-07 北京字节跳动网络技术有限公司 Multimedia content publishing method and apparatus, and electronic device and storage medium
CN112199526B (en) * 2020-09-30 2023-03-14 抖音视界有限公司 Method and device for issuing multimedia content, electronic equipment and storage medium
CN112395421A (en) * 2021-01-21 2021-02-23 平安科技(深圳)有限公司 Course label generation method and device, computer equipment and medium
CN113486197A (en) * 2021-06-28 2021-10-08 特赞(上海)信息科技有限公司 Multimedia label management method, device, equipment and storage medium
CN113921082A (en) * 2021-10-27 2022-01-11 云舟生物科技(广州)有限公司 Gene search weight adjustment method, computer storage medium, and electronic device

Similar Documents

Publication Publication Date Title
CN106649316B (en) Video pushing method and device
CN111353071A (en) Label generation method and device
CN108509465B (en) Video data recommendation method and device and server
CN111767461B (en) Data processing method and device
CN111737522B (en) Video matching method, and block chain-based infringement evidence-saving method and device
JP5984917B2 (en) Method and apparatus for providing suggested words
CN112163122A (en) Method and device for determining label of target video, computing equipment and storage medium
KR101916874B1 (en) Apparatus, method for auto generating a title of video contents, and computer readable recording medium
CN109977366B (en) Catalog generation method and device
CN113852832B (en) Video processing method, device, equipment and storage medium
CN110287375B (en) Method and device for determining video tag and server
CN109582847B (en) Information processing method and device and storage medium
CN114896454B (en) Short video data recommendation method and system based on label analysis
CN110968689A (en) Training method of criminal name and law bar prediction model and criminal name and law bar prediction method
CN112733024A (en) Information recommendation method and device
CN110569429B (en) Method, device and equipment for generating content selection model
CN114845149B (en) Video clip method, video recommendation method, device, equipment and medium
CN113537215A (en) Method and device for labeling video label
CN111651981A (en) Data auditing method, device and equipment
KR102560610B1 (en) Reference video data recommend method for video creation and apparatus performing thereof
CN112069818A (en) Triple prediction model generation method, relation triple extraction method and device
CN116614652A (en) Advertisement video clip replacement method, device and storage medium in live broadcast scene
CN108460131B (en) Classification label processing method and device
CN109561350B (en) User interest degree evaluation method and system
CN116229313A (en) Label construction model generation method and device, electronic equipment and storage medium

Legal Events

Code and description
PB01: Publication
SE01: Entry into force of request for substantive examination