CN111241341A - Video identification information processing method and video searching method, device and server - Google Patents


Info

Publication number
CN111241341A
CN111241341A (application CN201811434120.7A)
Authority
CN
China
Prior art keywords
video
identification information
search
information
server
Prior art date
Legal status (assumed, not a legal conclusion)
Pending
Application number
CN201811434120.7A
Other languages
Chinese (zh)
Inventor
汪忠超
吴凯
Current Assignee (listed assignee may be inaccurate)
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN201811434120.7A
Publication of CN111241341A
Legal status: Pending

Abstract

Embodiments of the present disclosure disclose a video identification information processing method, a video search method, an apparatus, and a server. The video identification information processing method includes the following steps: acquiring identification information of a second video; and updating identification information of a first video according to the identification information of the second video, where the updated identification information of the first video includes at least part of the identification information of the second video, and the first video and the second video have the same video content segment. The technical solution of the embodiments shares identification information among videos that have the same video content segment, so that the video a user needs can be found accurately during video search.

Description

Video identification information processing method and video searching method, device and server
Technical Field
Embodiments of the present disclosure relate to the technical field of video processing, and in particular to a video identification information processing method, a video search method, an apparatus, and a server.
Background
With the development of the internet and intelligent terminals, various video services enrich users' life, work, and entertainment.
In the prior art, when a user searches for a video, the search is based on the video's identification information (TAGs), such as the video name, the names of its actors, and its description information. However, because identification information is not uniform across videos, a video whose actual content matches the query may lack the corresponding identification information and therefore cannot be found, making video search inaccurate.
Disclosure of Invention
Embodiments of the present disclosure provide a video identification information processing method, a video search method, an apparatus, and a server, which share identification information among videos having the same video content segment, so that the video a user needs can be found accurately during video search.
In a first aspect, an embodiment of the present disclosure provides a method for processing identification information of a video, including:
acquiring identification information of a second video;
and updating the identification information of the first video according to the identification information of the second video, wherein the updated identification information of the first video includes at least part of the identification information of the second video, and the first video and the second video have the same video content segment.
Optionally, updating the identification information of the first video according to the identification information of the second video specifically includes:
comparing the identification information of the first video with the identification information of the second video to obtain first difference identification information, wherein the first difference identification information is identification information which is included in the identification information of the second video and is not included in the identification information of the first video;
and adding the first difference identification information into the identification information of the first video to obtain the updated identification information of the first video.
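The difference-and-append update in the two steps above can be sketched as follows; the function name and the representation of identification information (TAGs) as a Python list are illustrative assumptions, not part of the disclosure:

```python
def update_identification(first_tags, second_tags):
    """Update the first video's TAG list with TAGs from a second video.

    Illustrative sketch: compute the first difference identification
    information (TAGs present in the second video's list but absent from
    the first video's), then append it to the first video's list.
    """
    difference = [tag for tag in second_tags if tag not in first_tags]
    return first_tags + difference
```

Updating the second video from the first is the symmetric call, `update_identification(second_tags, first_tags)`.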
Optionally, before acquiring the identification information of the second video, the method further includes:
confirming that the first video and the second video have the same video content segment.
Optionally, the method further includes:
acquiring identification information of a third video;
and updating the identification information of the first video according to the identification information of the third video, wherein the updated identification information of the first video includes at least part of the identification information of the third video, and the third video and the second video have the same video content segment.
Optionally, updating the identification information of the first video according to the identification information of the third video specifically includes:
comparing the identification information of the first video with the identification information of the third video to obtain second difference identification information, wherein the second difference identification information is identification information which is included in the identification information of the third video and is not included in the identification information of the first video;
and adding the second difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
Optionally, the identification information is any one or a combination of a video name, a director name, a screenwriter name, an actor name, a release date, a shooting date, and a country.
Optionally, the same video content segment is a video content segment whose similarity value exceeds a preset threshold.
In a second aspect, an embodiment of the present disclosure provides a video search method, including:
acquiring search request information input by a user and sent by terminal equipment, and acquiring search keywords according to the search request information;
and matching the search keyword against the updated identification information of the first video, and determining the first video as a search result when they match, wherein the updated identification information of the first video includes at least part of the identification information of a second video, and the first video is associated with, and has the same video content segment as, the second video.
Optionally, acquiring the search request information input by the user and sent by the terminal device, and obtaining the search keyword according to the search request information, specifically includes:
acquiring search request information input by a user and sent by a terminal device, where the search request information carries text request information, and performing word segmentation on the text request information to extract search keywords; or
acquiring search request information input by a user and sent by a terminal device, where the search request information carries speech request information, converting the speech request information into text request information, and performing word segmentation on the text request information to extract search keywords.
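A minimal sketch of the two branches above, under stated assumptions: the request is modeled as a dict carrying either a `text` or a `speech` field (names invented here), the speech-to-text step is stubbed out, and a naive whitespace/punctuation split stands in for a real word segmenter:

```python
import re

def speech_to_text(audio):
    # Stand-in for a real speech-recognition service; a production
    # system would call an ASR engine here. This stub returns the
    # input unchanged.
    return audio

def extract_keywords(request):
    """Extract search keywords from search request information.

    `request` carries either text request information ("text") or
    speech request information ("speech"); speech is first converted
    to text, then the text is segmented into keywords.
    """
    if "speech" in request:
        text = speech_to_text(request["speech"])
    else:
        text = request["text"]
    # Naive segmentation on whitespace and punctuation; a Chinese-language
    # search system would use a proper word segmenter instead.
    return [w for w in re.split(r"[\s,.;!?]+", text) if w]
```
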
Optionally, after determining that the first video is a search result, the method further includes:
and sending a search result comprising the playing information of the first video to the terminal equipment.
In a third aspect, an embodiment of the present disclosure provides an apparatus for processing identification information of a video, including: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring the identification information of the second video;
and the processing module is configured to update the identification information of the first video according to the identification information of the second video, wherein the updated identification information of the first video includes at least part of the identification information of the second video, and the first video and the second video have the same video content segment.
Optionally, the processing module is specifically configured to compare the identification information of the first video with the identification information of the second video to obtain first difference identification information, where the first difference identification information is identification information that is included in the identification information of the second video and is not included in the identification information of the first video; and adding the first difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
Optionally, the processing module is further configured to confirm that the first video and the second video have the same video content segment before the obtaining module obtains the identification information of the second video.
Optionally, the obtaining module is further configured to obtain identification information of the third video;
and the processing module is further configured to update the identification information of the first video according to the identification information of the third video, where the updated identification information of the first video includes at least part of the identification information of the third video, and the third video and the second video have the same video content segment.
Optionally, the processing module is specifically configured to compare the identification information of the first video with the identification information of the third video to obtain second difference identification information, where the second difference identification information is identification information that is included in the identification information of the third video and is not included in the identification information of the first video; and adding the second difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
Optionally, the identification information is any one or a combination of a video name, a director name, a drama name, an actor name, a showing date, a shooting date, and a country.
Optionally, the same video content segment is a video content segment whose similarity value exceeds a preset threshold.
In a fourth aspect, an embodiment of the present disclosure provides a video search apparatus, including: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring search request information input by a user and sent by the terminal equipment and acquiring search keywords according to the search request information;
and the processing module is configured to match the search keyword against the updated identification information of the first video, and to determine the first video as a search result when they match, where the updated identification information of the first video includes at least part of the identification information of a second video, the first video is associated with the second video, and the first video and the second video have the same video content segment.
Optionally, the processing module is specifically configured to: acquire search request information input by a user and sent by the terminal device, where the search request information carries text request information, and perform word segmentation on the text request information to extract search keywords; or acquire search request information input by a user and sent by the terminal device, where the search request information carries speech request information, convert the speech request information into text request information, and perform word segmentation on the text request information to extract search keywords.
Optionally, the method further includes: a sending module;
and the sending module is used for sending the search result comprising the playing information of the first video to the terminal equipment after the processing module determines that the first video is the search result.
In a fifth aspect, an embodiment of the present disclosure further provides a server, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the video identification information processing method according to any one of the first aspect of the embodiments of the present disclosure.
In a sixth aspect, an embodiment of the present disclosure further provides a server, including:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more programs cause the one or more processors to implement the video search method according to any one of the second aspect of the embodiments of the present disclosure.
In a seventh aspect, the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for processing identification information of a video according to any one of the first aspect of the present disclosure, and/or implements the method for searching a video according to any one of the second aspect of the present disclosure.
The embodiment of the disclosure can acquire the identification information of the second video, and update the identification information of the first video by using the identification information of the second video, so that the updated identification information of the first video at least comprises part of the identification information of the second video. Because the first video and the second video have the same video content segment, the technical scheme disclosed by the embodiment of the disclosure can share the identification information of the videos with the same video content segment, so that the videos required by the user can be accurately searched out during video searching.
Drawings
Fig. 1 is a schematic flowchart of a method for processing identification information of a video according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another method for processing identification information of a video according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of a video search method provided by an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an identification information processing apparatus for a video according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a video search apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another video search apparatus provided in the embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a server provided in the embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of another server provided in the embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the disclosure and are not limiting of the disclosure. It should be further noted that, for the convenience of description, only some of the structures relevant to the present disclosure are shown in the drawings, not all of them.
It should be noted that the terms "system" and "network" are often used interchangeably in this disclosure. The term "and/or" in embodiments of the present disclosure includes any and all combinations of one or more of the associated listed items. The terms "first," "second," "third," and the like in the description, claims, and drawings of the present disclosure are used to distinguish between different objects, not to denote a particular order.
It should also be noted that the following embodiments of the present disclosure may be implemented individually, or may be implemented in combination with each other, and the embodiments of the present disclosure are not limited specifically.
The embodiment of the disclosure provides a video identification information processing method and a video searching method, device and server, which can share identification information of videos with the same video content segment, so that the videos required by a user can be accurately searched out during video searching.
Embodiments of the present disclosure are applicable to video search scenarios. Such a scenario may involve searching for and/or playing videos through a terminal device (e.g., a smart TV or a mobile phone), a server, or both. The server provides support for the terminal device and may also store video resources and/or video association relationships. Specifically, the server may consist of a single server, or of multiple servers each playing a different role; for example, it may include a storage server that stores video resources and/or an association relationship server that stores video association relationships, which the embodiments of the present disclosure do not specifically limit. Furthermore, in the technical solution of the embodiments, a search database may be provided in the server to store the identification information of each video; this, too, is only an example and not a limitation of the technical solution.
Fig. 1 is a schematic flowchart of a video identification information processing method according to an embodiment of the present disclosure. The method is executed on the server side. As shown in fig. 1, the method specifically includes the following steps:
s100, the server confirms that the first video and the second video have the same video content segment.
The same video content segment may be a video content segment with a similarity value exceeding a preset threshold.
It should be noted that the first video and the second video having the same video content segment means that a video content segment of a certain duration included in the first video is also included in the second video. For example, the first video and the second video may both be clips of a certain TV series, where the first video contains a segment of one episode and the second video contains the full video content of that episode; or the first video may contain a segment of the Mth episode and a segment of the Nth episode, while the second video contains the full video content of the Mth episode. This embodiment imposes no limitation, as long as both videos contain the same video content segment.
In addition, when the first video and the second video have the same video content segment, an association relationship exists between them, and this association relationship is stored on the association relationship server. The server can therefore confirm that the first video and the second video have the same video content segment as follows: the server sends a query request to the association relationship server, which checks whether the two videos have an association relationship and feeds the query result back to the server.
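The query flow above can be sketched as follows; the class and method names are hypothetical, since the disclosure does not specify an interface for the association relationship server:

```python
class AssociationRelationServer:
    """Records which pairs of videos have the same video content segment."""

    def __init__(self):
        self._pairs = set()

    def add_association(self, video_a, video_b):
        # The relation is symmetric, so store an unordered pair.
        self._pairs.add(frozenset((video_a, video_b)))

    def query(self, video_a, video_b):
        # Answer whether the two videos have an association relationship,
        # i.e. whether they are known to share a video content segment.
        return frozenset((video_a, video_b)) in self._pairs
```

In step S100, the server would issue something like `query(first_video, second_video)` and proceed to acquire identification information only on a positive result.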
The association relationship between the first video and the second video may be established on the association relationship server, formed on another device, or entered through manual operation. Specifically, it may be obtained through any one or more of the following possible implementations:
First, the association relationship may be obtained by analyzing the video content of at least two videos and taking two videos that have the same video content segment as the first video and the second video, respectively. Specifically, a separate video content analysis device may analyze the video content of at least two videos, take two videos having the same video content segment as the first video and the second video, and thereby obtain the association relationship between them; after obtaining the association relationship, the video content analysis device may send it to the association relationship server, where it becomes available.
Illustratively, consider a first, a second, and a third video. The video content analysis device analyzes their video content and finds that the first and second videos have the same video content segment, the first and third videos do not, and the second and third videos do. The device therefore establishes and stores the association relationship between the first and second videos and that between the second and third videos.
Optionally, analyzing at least two videos for the same video content segment may proceed as follows. First, frames are extracted from each video to be analyzed to obtain a plurality of frame images, and multiple types of image features are extracted from each frame image (the feature types are not limited), yielding a set of features that characterize each image. Next, the video features of a video to be analyzed are determined from the same-type image features of its frames; for example, the image features may be arranged in the order of their corresponding frames to form a video feature, so that multiple types of video features are obtained. Finally, the videos to be analyzed are compared by sequence alignment over the obtained video features to compute their similarity. A threshold may be set on the similarity, and only when the similarity exceeds this preset threshold are the two videos considered to have the same video content segment.
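The frame-comparison procedure above can be sketched with heavy simplifying assumptions: each frame is reduced to a single hashable feature value, and the "sequence comparison" is a longest-common-run search. A real system would use multiple feature types and a more robust alignment; this is only an illustration of the threshold decision:

```python
def longest_matching_run(feats_a, feats_b):
    """Length of the longest run of consecutive frames whose features match."""
    best = 0
    for i in range(len(feats_a)):
        for j in range(len(feats_b)):
            k = 0
            while (i + k < len(feats_a) and j + k < len(feats_b)
                   and feats_a[i + k] == feats_b[j + k]):
                k += 1
            best = max(best, k)
    return best

def have_same_segment(feats_a, feats_b, threshold=0.5):
    """Decide whether two videos share a content segment.

    Similarity is the longest matching frame run relative to the shorter
    video; only when it exceeds the preset threshold are the two videos
    considered to have the same video content segment.
    """
    if not feats_a or not feats_b:
        return False
    similarity = longest_matching_run(feats_a, feats_b) / min(len(feats_a), len(feats_b))
    return similarity > threshold
```
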
Second, the association relationship may be obtained by splitting the second video with a video segmentation technique. In this implementation, a video splitting device splits the video; for example, the second video is split to obtain the first video. Because the first video is then a part of the second video, the two necessarily have the same video content segment, so the association relationship between them can be established at splitting time, and the video splitting device can send it to the association relationship server.
For example, taking the second video to be movie A: after all video content of movie A is acquired, a video splitting device may split out some of its highlight portions to meet users' different playing requirements, thereby obtaining the first video. The first video and the second video thus have the same video content segment, and the association relationship between them can be obtained directly at splitting time and sent to the association relationship server.
Third, the association relationship may be input by the user who uploads the first video; that is, when uploading a video, the user may at the same time input its association relationship with existing video resources. For example, when the copyright owner of movie A promotes it, a highlight segment, which is itself part of the video content of movie A, is often selected for promotion. When the copyright owner uploads this promotional video, the association relationship between it and movie A can be input.
Fourth, the association relationship may be obtained from a video association relationship list. The list may be formed on another device, for example formed and stored in the above video splitting device; it may be drawn up manually after a large number of videos have been watched and their content understood; or it may be obtained directly from a partner and provided to the association relationship server.
S101, the server acquires identification information of the second video.
Specifically, after confirming that the first video and the second video have the same video content segment, the server may obtain the identification information of the second video from the search database.
The identification information may be any one or a combination of a video name, a director name, a screenwriter name, an actor name, a release date, a shooting date, and a country.
S102, the server updates the identification information of the first video according to the identification information of the second video, wherein the updated identification information of the first video at least comprises part of the identification information of the second video.
Specifically, the method for updating the identification information of the first video by the server according to the identification information of the second video may be: the server compares the identification information of the first video with the identification information of the second video to obtain first difference identification information, wherein the first difference identification information is identification information which is included in the identification information of the second video and is not included in the identification information of the first video; subsequently, the server adds the first difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
Illustratively, Table 1 lists the identification information of the first video before the update. As can be seen from Table 1, the first video corresponds to three pieces of identification information.
TABLE 1
Identification information of the first video: TV series A; actor B; actor C
Table 2 lists the identification information of the second video before the update, where the first video and the second video have the same video content segment. As can be seen from Table 2, the second video corresponds to four pieces of identification information.
TABLE 2
Identification information of the second video: TV series A; director D; screenwriter E; shot in month X of year XXXX
As can be seen from Table 1, the three pieces of identification information corresponding to the first video are "TV series A", "actor B", and "actor C". As can be seen from Table 2, the four pieces of identification information corresponding to the second video are "TV series A", "director D", "screenwriter E", and "shot in month X of year XXXX".
The server then compares the identification information of the first video with that of the second video; the first difference identification information is "director D", "screenwriter E", and "shot in month X of year XXXX". The server adds the first difference identification information to the identification information of the first video, obtaining the updated identification information of the first video, which Table 3 lists.
TABLE 3
Updated identification information of the first video: TV series A; actor B; actor C; director D; screenwriter E; shot in month X of year XXXX
Optionally, the server may further update the identification information of the second video according to the identification information of the first video, where the updated identification information of the second video includes at least part of the identification information of the first video.
Taking the identification information in Tables 1 and 2 as an example, Table 4 lists the updated identification information of the second video.
TABLE 4
Updated identification information of the second video: TV series A; director D; screenwriter E; shot in month X of year XXXX; actor B; actor C
Therefore, the identification information of videos having the same video content segment can be shared, so that the video a user needs can be found accurately during video search.
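The Table 1/Table 2 walk-through above can be reproduced with a short sketch; the TAG strings are rendered with consistent names, and the list representation is an illustrative assumption:

```python
# TAGs of the first video (Table 1) and the second video (Table 2).
first = ["TV series A", "actor B", "actor C"]
second = ["TV series A", "director D", "screenwriter E",
          "shot in month X of year XXXX"]

# First difference identification information: TAGs of the second video
# missing from the first; appending them yields the updated first video.
diff_to_first = [t for t in second if t not in first]
updated_first = first + diff_to_first

# The symmetric update of the second video from the first.
diff_to_second = [t for t in first if t not in second]
updated_second = second + diff_to_second
```

After both updates, each video carries all six shared TAGs, so a search on any one of them reaches both videos.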
On the basis of the foregoing embodiment, fig. 2 is a schematic flowchart of another video identification information processing method provided by an embodiment of the present disclosure. As shown in fig. 2, in addition to steps S100 to S102 of the foregoing embodiment, the method further includes:
s103, the server confirms that the second video and the third video have the same video content segment.
Specifically, the method by which the server confirms that the second video and the third video have the same video content segment is similar to the method by which the server confirms that the first video and the second video have the same video content segment in step S100, and is not repeated here for brevity.
S104, the server acquires the identification information of the third video.
The identification information may be any one or a combination of a video name, a director name, a screenwriter name, an actor name, a release date, a shooting date, and a country.
S105, the server updates the identification information of the first video according to the identification information of the third video, wherein the updated identification information of the first video at least comprises the identification information of part of the third video.
Specifically, the method for updating the identification information of the first video by the server according to the identification information of the third video may be: the server compares the identification information of the first video with the identification information of the third video to obtain second difference identification information, wherein the second difference identification information is identification information which is included in the identification information of the third video and is not included in the identification information of the first video; subsequently, the server adds the second difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
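The compare-and-append update described above can be sketched as follows (an illustrative sketch; the function and variable names are not from the patent):

```python
def update_identification(target_info, source_info):
    """Apply the update described above: compute the difference
    identification information (entries included in the source video's
    identification information but not in the target's) and append it to
    the target video's identification information.

    Illustrative sketch; names are not from the patent.
    """
    # Difference identification information: in source, absent from target.
    difference = [item for item in source_info if item not in target_info]
    return target_info + difference


# Assuming, for illustration, that the first video originally carried
# three entries before the first update:
first_video = ["television show A", "actor B", "actor C"]
second_video = ["television show A", "director D", "screenplay E",
                "XXXX year X month shooting"]
updated_first = update_identification(first_video, second_video)
```

The same function covers both the first update (from the second video) and the second update (from the third video), since each update only appends the difference identification information.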
Illustratively, table 5 is an identification information list of a third video provided by the embodiment of the present disclosure, where the second video and the third video have the same video content segment. As can be seen from table 5, the third video corresponds to two pieces of identification information.
TABLE 5
Identification information of the third video: "television show A", "Hong Kong, China"
As can be seen from table 3, the six pieces of identification information corresponding to the first video after the first update are "television show A", "actor B", "actor C", "director D", "screenplay E", and "XXXX year X month shooting". As can be seen from table 5, the two pieces of identification information corresponding to the third video are "television show A" and "Hong Kong, China".
Then, the server compares the first-updated identification information of the first video with the identification information of the third video; the second difference identification information is "Hong Kong, China". Subsequently, the server may add the second difference identification information to the first-updated identification information of the first video to obtain the identification information of the first video after the second update. Table 6 is the identification information list of the first video after the second update according to the embodiment of the present disclosure.
TABLE 6
Identification information of the first video after the second update: "television show A", "actor B", "actor C", "director D", "screenplay E", "XXXX year X month shooting", "Hong Kong, China"
In the embodiment of the disclosure, the server can obtain the identification information of the second video, and update the identification information of the first video by using the identification information of the second video, so that the updated identification information of the first video at least includes part of the identification information of the second video. Because the first video and the second video have the same video content segment, the technical scheme disclosed by the embodiment of the disclosure can share the identification information of the videos with the same video content segment, so that the videos required by the user can be accurately searched out during video searching.
Fig. 3 is a schematic flowchart of a video search method provided in an embodiment of the present disclosure. The method is executed on the server side on the basis of the above video identification information processing method. As shown in fig. 3, the method specifically includes the following steps:
s201, the server obtains search request information input by a user and sent by the terminal device, and obtains search keywords according to the search request information.
It is understood that the search request information in this step may be input by the user through the terminal device or through a client application loaded in the terminal device, and that the terminal device may be a smart phone, a notebook computer, a tablet computer, any other terminal device having a playing and/or displaying function, or a terminal device capable of controlling other devices to play and/or search videos.
Specifically, the method by which the server acquires the search request information input by the user and sent by the terminal device, and acquires the search keyword according to the search request information, may include at least either of the following two scenarios:
the method comprises the steps that firstly, a server obtains search request information input by a user and sent by a terminal device, the search request information carries word request information, and word segmentation processing is carried out according to the word request information to extract search keywords.
This scenario applies when the user inputs the search request information as text through the client device.
In the second scenario, the server acquires search request information input by the user and sent by the terminal device, where the search request information carries voice request information; the server converts the voice request information into text request information and performs word segmentation processing on the text request information to extract the search keyword.
This scenario applies when the user inputs the search request information by voice through the client device.
In both scenarios, the word segmentation processing that the server performs on the text request information to extract the search keyword refers to dividing the request information into individual words, that is, recombining the continuous character sequence into a word sequence according to a given standard.
Illustratively, the server receives "I want to see the recently released movie A" input as text by the user through the client device, and performs word segmentation processing on it to extract the search keyword "movie A". As another example, the server receives "find drama B" input by voice by the user through the client device; the server first converts the voice request information into text request information and then performs word segmentation processing on "find drama B" to extract the search keyword "drama B".
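The segmentation step in these examples can be sketched as a greedy longest match against a vocabulary of known identification terms (a toy stand-in: the patent does not prescribe a particular segmentation algorithm, and the function and vocabulary names here are illustrative):

```python
def extract_search_keywords(text, vocabulary):
    """Recombine the continuous character sequence of the request text
    into a word sequence by greedy longest match against a vocabulary of
    known identification terms, returning the terms found.

    Toy stand-in; a real system would use a full word-segmentation
    component rather than this dictionary scan.
    """
    terms = sorted(vocabulary, key=len, reverse=True)  # longest first
    keywords = []
    i = 0
    while i < len(text):
        for term in terms:
            if text.startswith(term, i):
                keywords.append(term)
                i += len(term)
                break
        else:
            i += 1  # no vocabulary term starts here; skip one character
    return keywords
```

With a vocabulary containing "movie a" and "drama b", the two request texts above each yield exactly one search keyword.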
After the server obtains the search keyword, the video the user requests to search for is made clear, which can greatly shorten the search time and improve the search efficiency.
S202, the server matches the search keyword with the updated identification information of the first video, and determines that the first video is a search result when the search keyword matches the updated identification information of the first video, wherein the updated identification information of the first video at least comprises identification information of part of the second video, the first video is associated with the second video, and the first video and the second video have the same video content segment.
Specifically, after obtaining the search keyword, the server matches it with the identification information of each video in the search database, and determines that the first video is a search result when the search keyword matches the updated identification information of the first video. Because the updated identification information of the first video includes both the original identification information of the first video and identification information of part of the second video, it is more comprehensive, and the video required by the user can be searched out accurately.
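A minimal sketch of this matching step, assuming the search database is a mapping from video identifiers to their (updated) identification information lists (all names illustrative, not from the patent):

```python
def search_videos(keyword, search_database):
    """Match the search keyword against the (updated) identification
    information of every video in the search database and return the
    identifiers of the videos whose identification information contains
    the keyword.

    Linear-scan sketch; a production system would use an inverted index
    rather than scanning every video per query.
    """
    results = []
    for video_id, identification_info in search_database.items():
        # Case-insensitive exact match against any identification entry.
        if any(keyword.lower() == item.lower()
               for item in identification_info):
            results.append(video_id)
    return results
```

Because the first video's entry in such a database carries the shared identification information, a query for an entry that originally belonged only to the second video still retrieves the first video.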
S203, the server sends a search result comprising the playing information of the first video to the terminal equipment.
Specifically, the server sends the search result including the playing information of the first video to the terminal device, so that the terminal device generates a search result page according to the search result, and the user can play the first video by clicking it in the search result page.
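The search result carrying the playing information might be assembled as in the following sketch (the field names are assumptions for illustration; the patent does not specify a response format):

```python
def build_search_response(video_id, title, play_url):
    """Assemble the search result carrying the playing information of the
    matched first video, ready to be sent to the terminal device so it
    can render a search result page.

    Field names are hypothetical, not taken from the patent.
    """
    return {
        "results": [
            {
                "video_id": video_id,
                "title": title,
                # Playing information the terminal needs to start playback.
                "play_info": {"play_url": play_url},
            }
        ]
    }
```

The terminal device would render one page entry per element of `results` and use `play_info` when the user clicks the video.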
In the embodiment of the disclosure, the server can acquire the search keyword, match it with the updated identification information of the first video, and determine that the first video is a search result when they match. Because the updated identification information of the first video includes both the identification information of the first video and identification information of part of the second video, it is more comprehensive, and the video required by the user can be searched out accurately.
Fig. 4 is a schematic structural diagram of a video identification information processing apparatus according to an embodiment of the present disclosure. Specifically, the apparatus may be configured in a server and includes: an acquisition module 10 and a processing module 11.
An obtaining module 10, configured to obtain identification information of a second video;
The processing module 11 is configured to update the identification information of the first video according to the identification information of the second video, where the updated identification information of the first video at least includes identification information of part of the second video, and the first video and the second video have the same video content segment.
Optionally, the processing module 11 is specifically configured to compare the identification information of the first video with the identification information of the second video to obtain first difference identification information, where the first difference identification information is identification information that is included in the identification information of the second video and is not included in the identification information of the first video; and adding the first difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
Optionally, the processing module 11 is further configured to confirm that the first video and the second video have the same video content segment before the obtaining module obtains the identification information of the second video.
Optionally, the obtaining module 10 is further configured to obtain identification information of a third video;
the processing module 11 is further configured to update the identification information of the first video according to the identification information of the third video, where the updated identification information of the first video at least includes identification information of a part of the third video, and the third video and the second video have the same video content segment.
Optionally, the processing module 11 is specifically configured to compare the identification information of the first video with the identification information of the third video to obtain second difference identification information, where the second difference identification information is identification information that is included in the identification information of the third video and is not included in the identification information of the first video; and adding the second difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
Optionally, the identification information is any one or a combination of a video name, a director name, a screenwriter name, an actor name, a release date, a shooting date, and a country.
Optionally, the same video content segment is a video content segment whose similarity value exceeds a preset threshold.
The video identification information processing apparatus provided by the embodiment of the present disclosure can execute the steps executed by the server in the video identification information processing method provided by the embodiments of the present disclosure, and has the corresponding functional modules and beneficial effects of the executed method.
Fig. 5 is a schematic structural diagram of a video search apparatus according to an embodiment of the present disclosure, and specifically, the video search apparatus may be configured in a server, and includes: an acquisition module 20 and a processing module 21.
An obtaining module 20, configured to obtain search request information input by a user and sent by a terminal device, and obtain a search keyword according to the search request information;
The processing module 21 is configured to match the search keyword with the updated identification information of the first video, and determine that the first video is a search result when the search keyword matches the updated identification information of the first video, where the updated identification information of the first video at least includes identification information of part of the second video, the first video is associated with the second video, and the first video and the second video have the same video content segment.
Optionally, the processing module 21 is specifically configured to acquire search request information input by a user and sent by the terminal device, where the search request information carries text request information, and perform word segmentation processing on the text request information to extract the search keyword; or acquire search request information input by a user and sent by the terminal device, where the search request information carries voice request information, convert the voice request information into text request information, and perform word segmentation processing on the text request information to extract the search keyword.
Optionally, with reference to fig. 5, fig. 6 is a schematic structural diagram of another video search apparatus provided in the embodiment of the present disclosure, further including: a sending module 22.
And a sending module 22, configured to send, to the terminal device, a search result including the play information of the first video after the processing module 21 determines that the first video is the search result.
The video search apparatus provided by the embodiment of the present disclosure can execute the steps executed by the server in the video search method provided by the embodiments of the present disclosure, and has the corresponding functional modules and beneficial effects of the executed method.
Fig. 7 is a schematic structural diagram of a server according to an embodiment of the present disclosure, and as shown in fig. 7, the server includes a processor 30, a memory 31, an input device 32, and an output device 33; the number of the processors 30 in the server may be one or more, and one processor 30 is taken as an example in fig. 7; the processor 30, the memory 31, the input device 32 and the output device 33 in the server may be connected by a bus or other means, and the bus connection is exemplified in fig. 7. A bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The memory 31 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the identification information processing method of video in the embodiment of the present disclosure. The processor 30 executes various functional applications of the server and data processing, that is, implements the above-described identification information processing method of the video, by executing software programs, instructions, and modules stored in the memory 31.
The memory 31 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 31 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 31 may further include memory located remotely from processor 30, which may be connected to a server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 32 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the server. The output device 33 may include a display device such as a display screen.
Fig. 8 is a schematic structural diagram of another server provided in the embodiment of the present disclosure, and as shown in fig. 8, the server includes a processor 40, a memory 41, an input device 42, and an output device 43; the number of the processors 40 in the server may be one or more, and one processor 40 is taken as an example in fig. 8; the processor 40, the memory 41, the input device 42 and the output device 43 in the server may be connected by a bus or other means, and the bus connection is exemplified in fig. 8. A bus represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The memory 41 is a computer-readable storage medium, and can be used for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the video search method in the embodiments of the present disclosure. The processor 40 executes various functional applications of the server and data processing by running software programs, instructions, and modules stored in the memory 41, that is, implements the video search method described above.
The memory 41 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 41 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 41 may further include memory located remotely from processor 40, which may be connected to a server over a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 42 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the server. The output device 43 may include a display device such as a display screen.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such embodiments, the computer program may be downloaded and installed from a network, or installed from memory. The computer program, when executed by a processor, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. The name of the module does not in some cases constitute a limitation to the module itself, and for example, the acquisition module 10 may also be described as a "module that acquires identification information of the second video".
The foregoing description is only of the preferred embodiments of the present disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure is not limited to the particular combination of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (15)

1. A method for processing identification information of a video, comprising:
acquiring identification information of a second video;
and updating the identification information of the first video according to the identification information of the second video, wherein the updated identification information of the first video at least comprises the identification information of part of the second video, and the first video and the second video have the same video content segment.
2. The method according to claim 1, wherein the updating the identification information of the first video according to the identification information of the second video specifically comprises:
comparing the identification information of the first video with the identification information of the second video to obtain first difference identification information, wherein the first difference identification information is identification information which is included in the identification information of the second video and is not included in the identification information of the first video;
and adding the first difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
3. The method of claim 1, further comprising, before obtaining the identification information of the second video:
confirming that the first video and the second video have the same video content segment.
4. The method of claim 1, further comprising:
acquiring identification information of a third video;
and updating the identification information of the first video according to the identification information of the third video, wherein the updated identification information of the first video at least comprises the identification information of part of the third video, and the third video and the second video have the same video content segment.
5. The method according to claim 4, wherein the updating the identification information of the first video according to the identification information of the third video specifically comprises:
comparing the identification information of the first video with the identification information of the third video to obtain second difference identification information, wherein the second difference identification information is identification information which is included in the identification information of the third video and is not included in the identification information of the first video;
and adding the second difference identification information to the identification information of the first video to obtain the updated identification information of the first video.
6. The method according to claim 1, wherein the identification information is any one or more of a video name, a director name, a screenwriter name, an actor name, a release date, a shooting date, and a country.
7. The method according to claim 1, wherein the identical video content segments are video content segments with similarity values exceeding a preset threshold.
8. A video search method, comprising:
acquiring search request information input by a user and sent by terminal equipment, and acquiring search keywords according to the search request information;
matching the search keyword with the updated identification information of the first video, and determining that the first video is a search result when the search keyword matches the updated identification information of the first video, wherein the updated identification information of the first video at least comprises identification information of part of the second video, the first video is associated with the second video, and the first video and the second video have the same video content segment.
9. The method according to claim 8, wherein the obtaining search request information sent by the terminal device and input by the user, and obtaining the search keyword according to the search request information specifically include:
acquiring search request information input by a user and sent by terminal equipment, wherein the search request information carries text request information, and performing word segmentation processing on the text request information to extract the search keyword; or,
the method comprises the steps of obtaining search request information input by a user and sent by terminal equipment, converting the voice request information into character request information, and performing word segmentation processing according to the character request information to extract search keywords, wherein the search request information carries the voice request information.
10. The method of claim 8, after determining that the first video is a search result, further comprising:
and sending a search result comprising the playing information of the first video to the terminal equipment.
11. An identification information processing apparatus for a video, comprising: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring the identification information of the second video;
the processing module is configured to update the identification information of the first video according to the identification information of the second video, wherein the updated identification information of the first video at least comprises identification information of part of the second video, and the first video and the second video have the same video content segment.
12. A video search apparatus, comprising: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring search request information input by a user and sent by terminal equipment and acquiring search keywords according to the search request information;
the processing module is configured to match the search keyword with the updated identification information of the first video, and determine that the first video is a search result when the search keyword matches the updated identification information of the first video, wherein the updated identification information of the first video at least comprises identification information of part of the second video, the first video is associated with the second video, and the first video and the second video have the same video content segment.
13. A server, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the identification information processing method for a video according to any one of claims 1 to 7.
14. A server, comprising:
one or more processors;
a memory for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the video search method according to any one of claims 8 to 10.
15. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the identification information processing method for a video according to any one of claims 1 to 7 and/or the video search method according to any one of claims 8 to 10.
CN201811434120.7A 2018-11-28 2018-11-28 Video identification information processing method and video searching method, device and server Pending CN111241341A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811434120.7A CN111241341A (en) 2018-11-28 2018-11-28 Video identification information processing method and video searching method, device and server


Publications (1)

Publication Number Publication Date
CN111241341A true CN111241341A (en) 2020-06-05

Family

ID=70872164


Country Status (1)

Country Link
CN (1) CN111241341A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103686452A (en) * 2013-12-06 2014-03-26 北京普瑞众合国际科技有限公司 Addition processing method for video associated information
CN105141569A (en) * 2014-05-30 2015-12-09 华为技术有限公司 Media processing method and device
CN107704525A (en) * 2017-09-04 2018-02-16 优酷网络技术(北京)有限公司 Video searching method and device
US10057833B2 (en) * 2013-03-14 2018-08-21 T-Mobile Usa, Inc. System and method for optimizing a media gateway selection in mobile switching center pool architecture


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112423098A (en) * 2020-11-20 2021-02-26 北京字节跳动网络技术有限公司 Video processing method, electronic device and storage medium
WO2022105898A1 (en) * 2020-11-20 2022-05-27 北京字节跳动网络技术有限公司 Video processing method, electronic apparatus, and storage medium
CN113360710A (en) * 2021-05-27 2021-09-07 北京奇艺世纪科技有限公司 Method and device for determining combination degree between objects, computer equipment and storage medium
CN113360710B (en) * 2021-05-27 2023-09-01 北京奇艺世纪科技有限公司 Method and device for determining combination degree between objects, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110134829B (en) Video positioning method and device, storage medium and electronic device
CN109815261B (en) Global search function implementation and data real-time synchronization method and device and electronic equipment
US20180159928A1 (en) Network-Based File Cloud Synchronization Method
US20200082814A1 (en) Method and apparatus for operating smart terminal
CN109271556B (en) Method and apparatus for outputting information
US20190188329A1 (en) Method and device for generating briefing
US20190327105A1 (en) Method and apparatus for pushing information
US11800201B2 (en) Method and apparatus for outputting information
CN109600625B (en) Program searching method, device, equipment and medium
CN108197336B (en) Video searching method and device
CN111753673A (en) Video data detection method and device
US20170142454A1 (en) Third-party video pushing method and system
CN106407268B (en) Content retrieval method and system based on coverage optimization method
CN108038172B (en) Search method and device based on artificial intelligence
CN104853251A (en) Online collection method and device for multimedia data
CN111241341A (en) Video identification information processing method and video searching method, device and server
CN110442844B (en) Data processing method, device, electronic equipment and storage medium
CN109063200B (en) Resource searching method and device, electronic equipment and computer readable medium
CN109241344B (en) Method and apparatus for processing information
CN111246254A (en) Video recommendation method and device, server, terminal equipment and storage medium
US20140010521A1 (en) Video processing system, video processing method, video processing apparatus, control method of the apparatus, and storage medium storing control program of the apparatus
CN111274449A (en) Video playing method and device, electronic equipment and storage medium
CN111147905A (en) Media resource searching method, television, storage medium and device
CN110413603B (en) Method and device for determining repeated data, electronic equipment and computer storage medium
CN110035298B (en) Media quick playing method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination