CN110674345A - Video searching method and device and server - Google Patents

Video searching method and device and server Download PDF

Info

Publication number
CN110674345A
CN110674345A · Application CN201910865161.XA
Authority
CN
China
Prior art keywords
video
target
user
scene
timestamp information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910865161.XA
Other languages
Chinese (zh)
Inventor
毕震坤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201910865161.XA priority Critical patent/CN110674345A/en
Publication of CN110674345A publication Critical patent/CN110674345A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/735Filtering based on additional data, e.g. user or group profiles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Library & Information Science (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Television Signal Processing For Recording (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Embodiments of the invention provide a video search method, a video search apparatus, and a server. The method is applied to the server and comprises the following steps: acquiring a keyword input by a user; when the keyword contains a preset condition keyword, searching, according to attribute information of each video, for videos containing a first target scene tag matching the keyword, as first target videos; obtaining first target timestamp information of the clip corresponding to the first target scene tag in each first target video; and determining the identification information of the first target videos and the first target timestamp information as the search result. Because scene identification is performed on each video in advance and the results are aggregated into attribute information, which comprises the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video, the video whose content best matches the keyword can be found according to the degree of matching between the keyword and the scene tags.

Description

Video searching method and device and server
Technical Field
The invention relates to the technical field of multimedia, in particular to a video searching method, a video searching device and a server.
Background
With the rapid development of internet technology, watching videos by using intelligent terminals such as mobile phones and tablet computers has become a popular entertainment mode for the public. The user can input the keywords of the videos which the user wants to watch in the search bar of the video playing platform and select the videos which the user is interested in from the returned video search results to watch.
However, in some application scenarios, a user may only want to watch a video, or a video clip, containing a specific piece of content. In the related art, the video playing platform searches according to the degree of correlation between the keywords and the video titles, so the videos in the search result may not contain the specific content the user wants to watch.
It can be seen that, in the related art, video search often cannot be performed according to the degree of correlation between the keywords input by the user and the actual video content.
Disclosure of Invention
The embodiment of the invention aims to provide a video searching method, a video searching device and a video searching server, so that video searching can be carried out according to the correlation degree of keywords input by a user and video contents. The specific technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a video search method, applied to a server, including:
acquiring a keyword input by a user;
when the keywords input by the user contain a preset condition keyword, searching, according to attribute information of each video, for videos containing a first target scene tag matching the keywords, as first target videos; the attribute information is obtained by performing scene identification on each video in advance and aggregating the results, and comprises the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video;
obtaining first target timestamp information of the clip corresponding to the first target scene tag in the first target video;
and determining the identification information of the first target video and the first target timestamp information as a search result.
Optionally, the method further includes:
when the keywords input by the user do not contain preset condition keywords, searching a video containing a video title matched with the keywords to serve as a second target video;
judging whether each scene label of each second target video comprises a second target scene label matched with the user label; if not, taking the identification information of the second target video as a search result; the user tags are obtained in advance according to scene tag statistics of videos watched by the user historically;
if so, obtaining second target timestamp information of the clip corresponding to the second target scene tag in the second target video;
and determining the identification information of the second target video and the second target timestamp information as the search result.
Optionally, if there is a second target scene tag respectively matched with the multiple user tags, the step of determining the identification information of the second target video and the second target timestamp information as the search result includes:
determining the second target videos and the second target timestamp information as search results according to the weights of the user tags, and arranging the search results in descending order; the weight of each user tag is calculated from the scene tag statistics of the videos the user has historically watched.
Optionally, after determining the search result, the method further includes:
displaying the search result in the current page;
detecting whether a user clicks first target timestamp information, a first target video, second target timestamp information or a second target video in the search result;
when detecting that a user clicks a first target video or a second target video, playing the first target video or the second target video;
and when detecting that the user clicks the first target timestamp information or the second target timestamp information, jumping to the first target video or the second target video and, according to that timestamp information, playing the clip corresponding to the first target timestamp information or the clip corresponding to the second target timestamp information.
Optionally, the method further includes:
detecting, during video playback, whether the user clicks a preset capture button; if yes, taking the current frame at the moment the user clicks the capture button as the start frame;
when it is detected that the user clicks the capture button again, taking the current frame at that moment as the end frame;
intercepting the video data between the start frame and the end frame from the playing video to obtain a video clip to be marked, and taking the timestamps of the start frame and the end frame as the timestamp information of the video clip to be marked;
and receiving a scene tag of the video clip to be marked input by the user, and saving the scene tag input by the user and the timestamp information of the video clip to be marked into the attribute information of the played video.
In a second aspect, an embodiment of the present invention provides a video search apparatus, which is applied to a server, and the apparatus includes:
the keyword acquisition module is used for acquiring keywords input by a user;
the first search module is used for, when the keywords input by the user contain a preset condition keyword, searching, according to attribute information of each video, for videos containing a first target scene tag matching the keywords, as first target videos; the attribute information is obtained by performing scene identification on each video in advance and aggregating the results, and comprises the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video;
a first obtaining module, configured to obtain first target timestamp information of the clip corresponding to the first target scene tag in the first target video;
a first determining module, configured to determine, as a search result, the identification information of the first target video and the first target timestamp information.
Optionally, the apparatus further comprises:
the second searching module is used for searching a video containing a video title matched with the keyword as a second target video when the keyword input by the user does not contain a preset condition keyword;
the tag judgment module is used for judging whether each scene tag of each second target video comprises a second target scene tag matched with the user tag; if not, taking the identification information of the second target video as a search result; the user tags are obtained in advance according to scene tag statistics of videos watched by the user historically;
a second obtaining module, configured to obtain second target timestamp information of the clip corresponding to each second target scene tag in each second target video if the scene tags of each second target video include a second target scene tag matching a user tag;
and the second determining module is used for determining the identification information of the second target video and the second target timestamp information as the search result.
Optionally, if there are second target scene tags respectively matched with the plurality of user tags,
the second determining module is specifically configured to determine the second target videos and the second target timestamp information as search results according to the weights of the plurality of user tags, and arrange the search results in descending order; the weight of each user tag is calculated from the scene tag statistics of the videos the user has historically watched.
Optionally, the apparatus further comprises:
the display module is used for displaying the search result in the current page after the first determining module or the second determining module determines the search result;
the first detection module is used for detecting whether a user clicks the first target timestamp information, the first target video, the second target timestamp information or the second target video in the search result;
the first playing module is used for playing the first target video or the second target video when the fact that the user clicks the first target video or the second target video is detected;
and the second playing module is used for, when detecting that the user clicks the first target timestamp information or the second target timestamp information, jumping to the first target video or the second target video and playing, according to that timestamp information, the clip corresponding to the first target timestamp information or the clip corresponding to the second target timestamp information.
Optionally, the apparatus further comprises:
the second detection module is used for detecting, during video playback, whether the user clicks a preset capture button; if yes, taking the current frame at the moment the user clicks the capture button as the start frame;
the third detection module is used for, when it is detected that the user clicks the capture button again, taking the current frame at that moment as the end frame;
the intercepting module is used for intercepting the video data between the start frame and the end frame from the playing video to obtain a video clip to be marked, and taking the timestamps of the start frame and the end frame as the timestamp information of the video clip to be marked;
and the storage module is used for receiving a scene tag of the video clip to be marked input by the user, and saving the scene tag input by the user and the timestamp information of the video clip to be marked into the attribute information of the played video.
In a third aspect, an embodiment of the present invention provides a server, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with one another through the communication bus;
a memory for storing a computer program;
a processor, configured to implement the steps of the video search method according to any one of the first aspect when executing the program stored in the memory.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the video search method according to any one of the first aspects.
In a fifth aspect, an embodiment of the present invention further provides a computer program product containing instructions, which when run on a computer, causes the computer to execute any of the video search methods described above.
According to the video search method, apparatus, and server provided by the embodiments of the invention, the keywords input by the user are acquired; when the keywords contain a preset condition keyword, videos containing a first target scene tag matching the keywords are searched for according to the attribute information of each video, as first target videos; first target timestamp information of the clip corresponding to the first target scene tag in each first target video is obtained; and the identification information of the first target videos and the first target timestamp information are determined as the search result. Because scene identification is performed on the content of each video in advance and the results are aggregated to obtain the attribute information of each video, which comprises the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video, after the user inputs a keyword, the video whose content best matches the keyword can be found for the user according to the degree of matching between the keyword and the scene tags.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a video search method according to an embodiment of the present invention;
fig. 2 is another schematic flow chart of a video search method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of the user autonomously tagging video clips in the embodiment of FIG. 2;
FIG. 4 is a schematic structural diagram of a video search apparatus according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
In order to perform video search according to the degree of correlation between keywords input by a user and video content, the embodiment of the invention provides a video search method which is applied to a server. As shown in fig. 1, the method includes:
s101, obtaining keywords input by a user.
Specifically, the user may input a keyword in the search bar of the search page, where the input keyword may be the name of a film or television work, the name of a variety show, the name of an actor, a certain scene, or a certain line.
S102, when the keywords input by the user contain a preset condition keyword, searching, according to attribute information of each video, for videos containing a first target scene tag matching the keywords, as first target videos; the attribute information is obtained by performing scene identification on each video in advance and aggregating the results, and comprises the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video.
In this embodiment, the preset condition keyword may be "scene", "lines", "object", or the like. For example, if the user inputs "a movie with the lines 'I love you'" in the search bar, then, since the keywords contain the preset condition keyword "lines", a scene search should be performed to determine the first target videos.
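The dispatch between a scene search and a conventional search can be sketched as follows. This is a hypothetical illustration only: the function name `choose_search_mode` and the English renderings of the preset condition keywords are assumptions, not part of the patent.

```python
# Hypothetical sketch: decide the search mode from the user's query (S202).
# The preset condition keywords here are illustrative English renderings.
CONDITION_KEYWORDS = ("scene", "lines", "object")

def choose_search_mode(query: str) -> str:
    """Return 'scene' if the query contains any preset condition keyword,
    otherwise 'regular' (search by video title)."""
    if any(kw in query for kw in CONDITION_KEYWORDS):
        return "scene"
    return "regular"
```

A query such as "a movie with the lines 'I love you'" would take the scene-search branch, while a bare actor name would take the conventional title search.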
S103, first target timestamp information of the clip corresponding to the first target scene tag in the first target video is obtained.
And S104, determining the identification information of the first target video and the first target timestamp information as a search result.
In this step, the identification information of the first target video may include the duration of the first target video and the video cover picture; if the first target video is a television series or a variety show, the identification information may further include related information such as the episode number, the lead actors, and the like.
Because scene identification is performed on the content of each video in advance and the results are aggregated to obtain the attribute information of each video, which comprises the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video, after the user inputs a keyword, the video whose content best matches the keyword can be found for the user according to the degree of matching between the keyword and the scene tags.
Fig. 2 is another schematic flow chart of a video search method according to an embodiment of the present invention. The video searching method is applied to a server and comprises the following steps as shown in the figure:
s201, obtaining a keyword input by a user.
S202, judging whether the keywords input by the user contain preset condition keywords or not; if yes, go to step S203; if not, step S206 is performed.
In a possible implementation, after the keywords input by the user are obtained, it is further judged whether they contain a preset condition keyword; if yes, a scene search is performed; if not, a conventional search is performed.
S203, searching the video containing the first target scene label matched with the keyword according to the attribute information of each video, and taking the video as the first target video.
As in step S102 of the embodiment shown in fig. 1, the attribute information is obtained by performing scene identification on each video in advance and aggregating the results, and includes the scene tags of each scene of each video and the timestamp information of the clip corresponding to each scene tag in the video.
Illustratively, if the keyword input by the user is the name A of an actor, and a scene tag also contains the actor name A, the scene tag is considered to match the keyword, and the video containing that scene tag is a first target video.
The attribute information of a video may be as shown in Table 1:
TABLE 1

| Clip ID | Scene | Lines | Characters | Object | Timestamp information |
| --- | --- | --- | --- | --- | --- |
| 01 | Gunfight | | Actor A | Car | 00:01:06—00:03:00 |
| 02 | Hug | I love you | Actor A, actor B | | 01:00:00—01:02:00 |
| 03 | Party | Happy birthday | Actor C | Cake | 01:21:03—01:30:08 |

As shown in the table above, the scene tag corresponding to each clip may include at least one of four elements: scene, lines, character, and object. The scene tags of clip 01 are "gunfight", "actor A", and "car", and the clip is played at 00:01:06—00:03:00 in the video; the scene tags of clip 02 are "hug", "I love you", "actor A", and "actor B", played at 01:00:00—01:02:00; the scene tags of clip 03 are "party", "happy birthday", "actor C", and "cake", played at 01:21:03—01:30:08.
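The per-clip attribute information and the scene search over it can be sketched as follows. This is a hypothetical illustration: the data layout, the name `scene_search`, and substring matching as the tag-matching rule are assumptions for the sketch, not the patent's mandated implementation.

```python
# Hypothetical sketch of the attribute information and scene search (S203/S204):
# each video stores, per clip, its scene tags and the clip's timestamps.
ATTRIBUTES = {
    "video_1": [
        {"clip": "01", "tags": {"gunfight", "actor A", "car"},
         "span": ("00:01:06", "00:03:00")},
        {"clip": "02", "tags": {"hug", "I love you", "actor A", "actor B"},
         "span": ("01:00:00", "01:02:00")},
    ],
}

def scene_search(keyword: str):
    """Return (video id, clip timestamps) for every clip with a matching scene tag."""
    results = []
    for video_id, clips in ATTRIBUTES.items():
        for clip in clips:
            # Illustrative matching rule: keyword appears inside a scene tag.
            if any(keyword in tag for tag in clip["tags"]):
                results.append((video_id, clip["span"]))
    return results
```

Searching for the line "I love you" would thus return the video's identifier together with the clip's timestamp information, which is exactly what steps S204 and S205 place into the search result.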
S204, first target timestamp information of the clip corresponding to the first target scene tag in the first target video is obtained.
S205, the identification information of the first target video and the first target timestamp information are determined as the search result. After that, step S211 is executed.
S206, searching for a video containing a video title matching the keyword as a second target video.
In this step, when the keywords input by the user do not contain a preset condition keyword, a conventional search is performed, that is, a video search according to the degree of matching between the keywords and the video titles.
Illustratively, if the keyword input by the user is the name A of an actor, and the title of a video also contains the actor name A, the video can be regarded as a second target video matching the keyword.
S207, judging whether each scene label of each second target video comprises a second target scene label matched with the user label; if yes, go to step S208; if not, step S210 is performed.
Specifically, the user tags may be obtained in advance according to scene tag statistics of videos watched by the user historically.
S208, second target timestamp information of the clip corresponding to the second target scene tag in the second target video is obtained.
S209, the identification information of the second target video and the second target timestamp information are determined as the search result. After that, step S211 is executed.
In steps S207 to S209, after the second target videos are determined by the conventional search, if the scene tags of a searched second target video include a second target scene tag matching a user tag, the video clip meeting the user's preference can additionally be recommended to the user, which increases the click-through rate while enhancing user stickiness.
In a possible implementation, the scene tags of the videos the user has historically watched are counted in advance; after the user tags are obtained, the weight of each user tag is further calculated from those scene tag statistics.
If the scene tags of the second target videos contain second target scene tags respectively matching multiple user tags, the identification information of the second target videos and the timestamp information of the video clips corresponding to the matching scene tags can be taken as the search results according to the weight of each user tag, and displayed in the current page in descending order of weight.
For example, suppose the user tags of a certain user are "gun battle" and "hug", with weights 0.8 and 0.2 respectively; the scene tags of second target video A are "gun battle" and "tiger", and the scene tags of second target video B are "car chase" and "hug". Although the scene tags of both second target videos contain a second target scene tag matching a user tag, video A should be ranked before video B when the search results are displayed, because the weight of the user tag "gun battle" is greater than that of "hug".
It should be understood that ranking the search results the user is more interested in higher reduces the time the user needs to find the video he wants to watch among the search results, improving the user experience.
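The weight computation and descending-order ranking described above can be sketched as follows. This is a hypothetical sketch: deriving a tag's weight as its share of all scene tags in the watch history, and scoring a video by its best-matching tag, are assumptions chosen to reproduce the 0.8/0.2 example, not the patent's prescribed formula.

```python
from collections import Counter

# Hypothetical sketch of S207-S209: user tag weights come from watch history,
# and matched videos are arranged in descending order of weight.

def user_tag_weights(watched_tags):
    """Weight of each user tag = its share of all scene tags in the history."""
    counts = Counter(watched_tags)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

def rank_videos(videos, weights):
    """Sort (video id, scene tags) pairs by best matching user-tag weight, descending."""
    def score(item):
        _, tags = item
        return max((weights.get(t, 0.0) for t in tags), default=0.0)
    return sorted(videos, key=score, reverse=True)
```

With a history of eight "gun battle" clips and two "hug" clips, the weights are 0.8 and 0.2, and a video tagged "gun battle" is ranked above one tagged "hug", matching the worked example.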
And S210, taking the identification information of the second target video as a search result.
And when the scene tags of the second target videos do not contain the scene tags matched with the user tags, directly taking the identification information of the second target videos as the search result and displaying the search result on the current page.
And S211, displaying the search result in the current page.
S212, detecting whether the user clicks the first target timestamp information, the first target video, the second target timestamp information or the second target video in the search result.
S213, playing the video selected by the user from the search results.
And when the fact that the user clicks the first target video or the second target video is detected, correspondingly playing the first target video or the second target video.
And when detecting that the user clicks the first target timestamp information or the second target timestamp information, jumping to the first target video or the second target video and, according to that timestamp information, playing the corresponding clip. In this way, the user can directly watch the video clip of interest without having to drag the progress bar to look for it.
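Seeking to a clip from its timestamp information can be sketched as below. The helper names are hypothetical; the only assumption is the "HH:MM:SS" timestamp format used throughout the document.

```python
# Hypothetical sketch of the jump in S213: convert the clip's "HH:MM:SS"
# timestamps into a seek position and duration for the player.

def to_seconds(ts: str) -> int:
    """Convert an 'HH:MM:SS' timestamp to seconds."""
    h, m, s = (int(part) for part in ts.split(":"))
    return h * 3600 + m * 60 + s

def clip_span_seconds(start: str, end: str):
    """Return (seek position, clip duration) in seconds for a clip."""
    begin, finish = to_seconds(start), to_seconds(end)
    return begin, finish - begin
```

For the clip 00:05:15—00:06:10 from the capture example below in the document, the player would seek to second 315 and play for 55 seconds.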
Obviously, the video search method provided in the embodiment shown in fig. 2 can perform either a conventional search or a scene search, depending on whether the keywords input by the user contain a preset condition keyword. On the one hand, when no preset condition keyword is contained, a conventional search is performed: besides displaying the conventional search results, clips the user may be interested in are recommended from the conventionally searched videos according to the user tags, so that the user can quickly find content of interest among the mass of returned videos, realizing video recommendation based on user preference. On the other hand, if the user has specific video content in mind and the input keywords contain a preset condition keyword, the timestamp information of that specific content in the target videos can be provided accurately through the scene search, returning results that match the user's preference, providing a personalized video search service, and further improving the user experience.
In addition, in the above embodiments, the video clips and the corresponding scene tags may be determined after the server identifies the video content. In other embodiments, the user may capture a clip from the video during playback, mark it with a scene tag, and store the tag locally or upload it to the server. The played video may be either a video the user searched for or a video the user selected from a video recommendation page.
The specific process is shown in fig. 3, and includes:
s301, in the video playing process, whether a user clicks a preset intercepting key is detected.
S302, when it is detected that the user clicks a preset intercepting key, taking the current frame when the user clicks the intercepting key as a starting frame.
And S303, when the preset intercepting key is detected to be clicked again by the user, taking the current frame when the intercepting key is clicked again by the user as a termination frame.
Specifically, the user may start video capture by clicking the up key and end video capture by clicking the up key again.
S304, intercepting video data between the starting frame and the ending frame from the playing video, obtaining a video card segment to be marked, and taking the time stamps of the starting frame and the ending frame as the time stamp information of the video card segment to be marked.
For example, if the timestamp of the start frame is 00:05:15 and the timestamp of the end frame is 00:06:10, then the corresponding timestamp information of the video card segment to be marked is 00:05: 15-00: 06: 10.
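The two-press capture flow of S301–S305 can be sketched as follows. The class and method names are hypothetical, and storing entries as (tag, span) pairs is an assumption for the sketch.

```python
# Hypothetical sketch of S301-S305: the first press of the capture button marks
# the start frame, the second press marks the end frame, and the user's scene
# tag is saved with the clip's timestamps into the video's attribute information.
class ClipCapture:
    def __init__(self):
        self.start = None        # timestamp of the start frame, if capturing
        self.attributes = []     # saved (scene tag, (start, end)) entries

    def press_capture(self, current_ts: str):
        """First press records the start frame and returns None;
        the second press returns the captured clip's (start, end) span."""
        if self.start is None:
            self.start = current_ts
            return None
        span, self.start = (self.start, current_ts), None
        return span

    def save_tag(self, tag: str, span):
        """Save the user's scene tag and timestamps as attribute information."""
        self.attributes.append((tag, span))
```

For the worked example above, pressing at 00:05:15 and again at 00:06:10 yields the span 00:05:15—00:06:10, which is then stored together with the user's scene tag.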
S305, receiving a scene label of the video card segment to be marked input by the user, and storing the scene label input by the user and the timestamp information of the video card segment to be marked into the attribute information of the played video.
In a possible implementation manner, after saving the scene tag input by the user and the timestamp information of the video card segment to be marked, a message is generated asking whether the user wants to collect the segment to be marked; after the user confirms, the card segment intercepted by the user is collected into the user's account.
In this way, the user can intercept favorite card segments at any time during video playback and independently mark them with scene labels, making it convenient to search for and watch these favorite segments at any time.
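The interception flow of steps S301 to S305 can be sketched as a small state machine: the first click of the intercepting key records the start frame, the second click records the end frame and yields the segment's timestamp information, and the scene label entered by the user is then stored into the played video's attribute information. The class and method names below are assumptions for illustration only.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PlayingVideo:
    # Attribute information: scene tag -> timestamp info of the card segment.
    attributes: dict = field(default_factory=dict)
    _start: Optional[str] = None  # start-frame timestamp of a pending capture

    def on_intercept_click(self, current_ts: str) -> Optional[str]:
        """Handle a click of the intercepting key (S301).

        The first click stores the current frame as the start frame (S302);
        the second click takes the current frame as the termination frame
        and returns the segment's timestamp information (S303/S304).
        """
        if self._start is None:
            self._start = current_ts
            return None
        info = f"{self._start}-{current_ts}"
        self._start = None
        return info

    def save_scene_tag(self, tag: str, timestamp_info: str) -> None:
        """Store the user's scene label and the segment's timestamp
        information into the played video's attribute information (S305)."""
        self.attributes[tag] = timestamp_info
```

Following the earlier example, a first click at 00:05:15 and a second at 00:06:10 produce the timestamp information 00:05:15-00:06:10, which is then saved under the user-supplied scene label.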
As shown in fig. 4, an embodiment of the present invention provides a video search apparatus, applied to a server, including:
a keyword obtaining module 410, configured to obtain a keyword input by a user;
a first search module 420, configured to, when a keyword input by a user includes a preset condition keyword, search, according to attribute information of each video, a video including a first target scene tag matching the keyword, and use the video as a first target video; the attribute information is obtained by counting after scene identification is carried out on each video in advance, wherein the attribute information comprises scene labels of each scene of each video and timestamp information of a card segment corresponding to each scene label in the video;
a first obtaining module 430, configured to obtain first target timestamp information of a card segment corresponding to a first target scene tag in a first target video;
a first determining module 440, configured to determine the identification information of the first target video and the first target timestamp information as the search result.
According to the video searching device provided by the embodiment of the invention, the keywords input by the user are obtained; when the keywords input by the user comprise preset condition keywords, searching videos containing first target scene labels matched with the keywords according to attribute information of each video, and taking the videos as first target videos; obtaining first target timestamp information of a card segment corresponding to the first target scene label in the first target video; and determining the identification information of the first target video and the first target timestamp information as a search result. Since the content of each video is subjected to scene identification in advance and the identification result is counted to obtain the attribute information of each video, and the attribute information comprises the scene tags of each scene of each video and the timestamp information of the corresponding card segment of each scene tag in the video, after the user inputs the keyword, the video with the video content most matched with the keyword can be found for the user according to the matching degree of the keyword and the scene tags.
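The scene-search path of the first search module and the first obtaining module can be sketched over a simple attribute-information store. The layout of `ATTRIBUTE_INFO`, the video identifiers, and the substring-based tag matching are all assumptions; the patent leaves the matching criterion unspecified.

```python
# Hypothetical attribute information: video id -> {scene tag: timestamp info}.
ATTRIBUTE_INFO = {
    "video_001": {"car chase": "00:05:15-00:06:10",
                  "wedding": "00:40:00-00:45:30"},
    "video_002": {"car chase": "01:02:00-01:03:45"},
}

def scene_search(keyword: str) -> list:
    """Return (video id, timestamp info) pairs for every card segment
    whose scene tag matches the keyword — i.e. the identification
    information of the first target video plus the first target
    timestamp information."""
    results = []
    for video_id, tags in ATTRIBUTE_INFO.items():
        for tag, ts_info in tags.items():
            if keyword in tag:  # assumed matching rule
                results.append((video_id, ts_info))
    return results
```

A query for "car chase" would thus return both matching videos together with the timestamp information needed to jump directly to the matching card segments.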
In a possible implementation, the video search apparatus further includes:
the second searching module is used for searching a video containing a video title matched with the keyword as a second target video when the keyword input by the user does not contain a preset condition keyword;
the tag judgment module is used for judging whether each scene tag of each second target video comprises a second target scene tag matched with the user tag; if not, taking the identification information of the second target video as a search result; the user tags are obtained in advance according to scene tag statistics of videos watched by the user historically;
a second obtaining module, configured to obtain second target timestamp information of a card segment corresponding to each second target scene tag in each second target video if each scene tag of each second target video includes a second target scene tag matched with a user tag;
and the second determining module is used for determining the identification information of the second target video and the second target timestamp information as the search result.
In a possible implementation manner, if there are second target scene tags respectively matched with a plurality of user tags, the second determining module is specifically configured to determine a second target video and the second target timestamp information as search results according to the weights of the plurality of user tags and arrange the search results in a descending order; the weight of the user label is obtained by calculating according to the scene label statistical result of the video watched by the user in history.
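The weight-based descending ordering performed by the second determining module can be sketched as follows; how the weights themselves are computed from viewing-history statistics is not specified in this passage, so the weight values here are placeholders.

```python
def rank_by_user_tag_weight(matches: list, user_tag_weights: dict) -> list:
    """Order search results by the weight of the matched user tag.

    matches: list of (video_id, timestamp_info, matched_user_tag)
    user_tag_weights: user tag -> weight derived from the user's
        historical viewing statistics (assumed precomputed)
    Returns the matches in descending order of tag weight.
    """
    return sorted(matches,
                  key=lambda m: user_tag_weights.get(m[2], 0.0),
                  reverse=True)
```

For example, a result matched via a tag weighted 0.7 would be listed before one matched via a tag weighted 0.3.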
In a possible implementation, the video search apparatus further includes:
the display module is used for displaying the search result in the current page after the first determining module or the second determining module determines the search result;
the first detection module is used for detecting whether a user clicks the first target timestamp information, the first target video, the second target timestamp information or the second target video in the search result;
the first playing module is used for playing the first target video or the second target video when the fact that the user clicks the first target video or the second target video is detected;
and the second playing module is used for jumping to the first target video or the second target video when detecting that the user clicks the first target timestamp information or the second target timestamp information, and playing the card segment corresponding to the first target timestamp information or the card segment corresponding to the second target timestamp information according to the first target timestamp information or the second target timestamp information.
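To jump to a card segment, the second playing module must recover playback offsets from the stored timestamp information string; a minimal sketch of that parsing step (function names assumed) is:

```python
def parse_timestamp(ts: str) -> int:
    """Convert an HH:MM:SS timestamp into a playback offset in seconds."""
    h, m, s = (int(x) for x in ts.split(":"))
    return h * 3600 + m * 60 + s

def playback_range(timestamp_info: str) -> tuple:
    """Convert timestamp information such as '00:05:15-00:06:10'
    into (start, end) playback offsets in seconds, so the player
    can seek to the card segment directly."""
    start, end = timestamp_info.split("-")
    return parse_timestamp(start), parse_timestamp(end)
```

Given the timestamp information 00:05:15-00:06:10, the player would seek to second 315 and play through second 370.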
In a possible implementation, the video search apparatus further includes:
the second detection module is used for detecting whether a user clicks a preset intercepting key or not in the video playing process; if yes, taking the current frame when the user clicks the intercepting key as a starting frame;
the third detection module is used for taking the current frame when the user clicks the interception key again as a termination frame when the user clicks the interception key again;
the intercepting module is used for intercepting video data between the starting frame and the ending frame from a playing video, obtaining a video card segment to be marked, and taking the time stamps of the starting frame and the ending frame as the time stamp information of the video card segment to be marked;
and the storage module is used for receiving a scene label of the video card segment to be marked input by the user, and storing the scene label input by the user and the timestamp information of the video card segment to be marked into the attribute information of the played video.
The embodiment of the present invention further provides a server, as shown in fig. 5, including a processor 501, a communication interface 502, a memory 503 and a communication bus 504, where the processor 501, the communication interface 502 and the memory 503 complete mutual communication through the communication bus 504,
a memory 503 for storing a computer program;
the processor 501, when executing the program stored in the memory 503, implements the following steps:
acquiring a keyword input by a user;
when the keywords input by the user comprise preset condition keywords, searching videos containing first target scene labels matched with the keywords according to attribute information of each video, and taking the videos as first target videos; the attribute information is obtained by counting after scene identification is carried out on each video in advance, wherein the attribute information comprises scene labels of each scene of each video and timestamp information of a card segment corresponding to each scene label in the video;
obtaining first target timestamp information of a card segment corresponding to the first target scene label in the first target video;
and determining the identification information of the first target video and the first target timestamp information as a search result.
The communication bus mentioned in the above server may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the server and other devices.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program, when executed by a processor, implements the steps of any of the above video search methods.
In yet another embodiment, a computer program product containing instructions is also provided, which when run on a computer causes the computer to perform any of the video search methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a network of computers, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (11)

1. A video search method is applied to a server, and the method comprises the following steps:
acquiring a keyword input by a user;
when the keywords input by the user comprise preset condition keywords, searching videos containing first target scene labels matched with the keywords according to attribute information of each video, and taking the videos as first target videos; the attribute information is obtained by counting after scene identification is carried out on each video in advance, wherein the attribute information comprises scene labels of each scene of each video and timestamp information of a card segment corresponding to each scene label in the video;
obtaining first target timestamp information of a card segment corresponding to the first target scene label in the first target video;
and determining the identification information of the first target video and the first target timestamp information as a search result.
2. The method of claim 1, further comprising:
when the keywords input by the user do not contain preset condition keywords, searching a video containing a video title matched with the keywords to serve as a second target video;
judging whether each scene label of each second target video comprises a second target scene label matched with the user label; if not, taking the identification information of the second target video as a search result; the user tags are obtained in advance according to scene tag statistics of videos watched by the user historically;
if so, obtaining second target timestamp information of a card segment corresponding to the second target scene label in the second target video;
and determining the identification information of the second target video and the second target timestamp information as the search result.
3. The method according to claim 2, wherein if there is a second target scene tag respectively matching with the plurality of user tags, the step of determining identification information of the second target video and the second target timestamp information as the search result comprises:
determining a second target video and the second target timestamp information as search results according to the weights of the user tags and arranging the search results in a descending order; the weight of the user label is obtained by calculating according to the scene label statistical result of the video watched by the user in history.
4. The method of claim 3, wherein after determining search results, the method further comprises:
displaying the search result in the current page;
detecting whether a user clicks first target timestamp information, a first target video, second target timestamp information or a second target video in the search result;
when detecting that a user clicks a first target video or a second target video, playing the first target video or the second target video;
and when detecting that the user clicks the first target timestamp information or the second target timestamp information, skipping to the first target video or the second target video, and playing the card segment corresponding to the first target timestamp information or the card segment corresponding to the second target timestamp information according to the first target timestamp information or the second target timestamp information.
5. The method of claim 1, further comprising:
detecting whether a user clicks a preset intercepting key or not in the video playing process; if yes, taking the current frame when the user clicks the intercepting key as a starting frame;
when the intercepting key is detected to be clicked again by the user, taking the current frame when the intercepting key is clicked again by the user as a termination frame;
intercepting video data between the starting frame and the ending frame from a playing video to obtain a video card segment to be marked, and taking timestamps of the starting frame and the ending frame as timestamp information of the video card segment to be marked;
receiving a scene label of a video card segment to be marked input by a user, and storing the scene label input by the user and the timestamp information of the video card segment to be marked into the attribute information of the played video.
6. A video search apparatus applied to a server, the apparatus comprising:
the keyword acquisition module is used for acquiring keywords input by a user;
the first search module is used for searching videos containing a first target scene label matched with the keywords according to the attribute information of each video when the keywords input by the user contain preset condition keywords, and taking the videos as first target videos; the attribute information is obtained by counting after scene identification is carried out on each video in advance, wherein the attribute information comprises scene labels of each scene of each video and timestamp information of a card segment corresponding to each scene label in the video;
a first obtaining module, configured to obtain first target timestamp information of a card segment corresponding to the first target scene tag in the first target video;
a first determining module, configured to determine, as a search result, the identification information of the first target video and the first target timestamp information.
7. The apparatus of claim 6, further comprising:
the second searching module is used for searching a video containing a video title matched with the keyword as a second target video when the keyword input by the user does not contain a preset condition keyword;
the tag judgment module is used for judging whether each scene tag of each second target video comprises a second target scene tag matched with the user tag; if not, taking the identification information of the second target video as a search result; the user tags are obtained in advance according to scene tag statistics of videos watched by the user historically;
a second obtaining module, configured to obtain second target timestamp information of a card segment corresponding to each second target scene tag in each second target video if each scene tag of each second target video includes a second target scene tag matched with a user tag;
and the second determining module is used for determining the identification information of the second target video and the second target timestamp information as the search result.
8. The apparatus of claim 7, wherein if there are second object scene tags respectively matching the plurality of user tags,
the second determining module is specifically configured to determine a second target video and the second target timestamp information as search results according to the weights of the plurality of user tags and arrange the search results in a descending order; the weight of the user label is obtained by calculating according to the scene label statistical result of the video watched by the user in history.
9. The apparatus of claim 8, further comprising:
the display module is used for displaying the search result in the current page after the first determining module or the second determining module determines the search result;
the first detection module is used for detecting whether a user clicks the first target timestamp information, the first target video, the second target timestamp information or the second target video in the search result;
the first playing module is used for playing the first target video or the second target video when the fact that the user clicks the first target video or the second target video is detected;
and the second playing module is used for jumping to the first target video or the second target video when detecting that the user clicks the first target timestamp information or the second target timestamp information, and playing the card segment corresponding to the first target timestamp information or the card segment corresponding to the second target timestamp information according to the first target timestamp information or the second target timestamp information.
10. The apparatus of claim 6, further comprising:
the second detection module is used for detecting whether a user clicks a preset intercepting key or not in the video playing process; if yes, taking the current frame when the user clicks the intercepting key as a starting frame;
the third detection module is used for taking the current frame when the user clicks the interception key again as a termination frame when the user clicks the interception key again;
the intercepting module is used for intercepting video data between the starting frame and the ending frame from a playing video, obtaining a video card segment to be marked, and taking the time stamps of the starting frame and the ending frame as the time stamp information of the video card segment to be marked;
and the storage module is used for receiving a scene label of the video card segment to be marked input by the user, and storing the scene label input by the user and the timestamp information of the video card segment to be marked into the attribute information of the played video.
11. A server, characterized by comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing a program stored in the memory.
CN201910865161.XA 2019-09-12 2019-09-12 Video searching method and device and server Pending CN110674345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910865161.XA CN110674345A (en) 2019-09-12 2019-09-12 Video searching method and device and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910865161.XA CN110674345A (en) 2019-09-12 2019-09-12 Video searching method and device and server

Publications (1)

Publication Number Publication Date
CN110674345A true CN110674345A (en) 2020-01-10

Family

ID=69077898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910865161.XA Pending CN110674345A (en) 2019-09-12 2019-09-12 Video searching method and device and server

Country Status (1)

Country Link
CN (1) CN110674345A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432282A (en) * 2020-04-01 2020-07-17 腾讯科技(深圳)有限公司 Video recommendation method and device
CN111726649A (en) * 2020-06-28 2020-09-29 百度在线网络技术(北京)有限公司 Video stream processing method, device, computer equipment and medium
CN111950360A (en) * 2020-07-06 2020-11-17 北京奇艺世纪科技有限公司 Method and device for identifying infringing user
CN112199582A (en) * 2020-09-21 2021-01-08 聚好看科技股份有限公司 Content recommendation method, device, equipment and medium
CN113286172A (en) * 2020-12-11 2021-08-20 苏州律点信息科技有限公司 Method and device for determining uploaded video and cloud server
CN113536823A (en) * 2020-04-10 2021-10-22 天津职业技术师范大学(中国职业培训指导教师进修中心) Video scene label extraction system and method based on deep learning and application thereof

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140140681A1 (en) * 2012-11-21 2014-05-22 Hon Hai Precision Industry Co., Ltd. Video content search method, system, and device
CN109189987A (en) * 2017-09-04 2019-01-11 优酷网络技术(北京)有限公司 Video searching method and device
WO2019134587A1 (en) * 2018-01-02 2019-07-11 阿里巴巴集团控股有限公司 Method and device for video data processing, electronic device, and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140140681A1 (en) * 2012-11-21 2014-05-22 Hon Hai Precision Industry Co., Ltd. Video content search method, system, and device
CN109189987A (en) * 2017-09-04 2019-01-11 优酷网络技术(北京)有限公司 Video searching method and device
WO2019134587A1 (en) * 2018-01-02 2019-07-11 阿里巴巴集团控股有限公司 Method and device for video data processing, electronic device, and storage medium

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111432282A (en) * 2020-04-01 2020-07-17 腾讯科技(深圳)有限公司 Video recommendation method and device
CN111432282B (en) * 2020-04-01 2022-01-04 腾讯科技(深圳)有限公司 Video recommendation method and device
CN113536823A (en) * 2020-04-10 2021-10-22 天津职业技术师范大学(中国职业培训指导教师进修中心) Video scene label extraction system and method based on deep learning and application thereof
CN111726649A (en) * 2020-06-28 2020-09-29 百度在线网络技术(北京)有限公司 Video stream processing method, device, computer equipment and medium
CN111950360A (en) * 2020-07-06 2020-11-17 北京奇艺世纪科技有限公司 Method and device for identifying infringing user
CN111950360B (en) * 2020-07-06 2023-08-18 北京奇艺世纪科技有限公司 Method and device for identifying infringement user
CN112199582A (en) * 2020-09-21 2021-01-08 聚好看科技股份有限公司 Content recommendation method, device, equipment and medium
CN112199582B (en) * 2020-09-21 2023-07-18 聚好看科技股份有限公司 Content recommendation method, device, equipment and medium
CN113286172A (en) * 2020-12-11 2021-08-20 苏州律点信息科技有限公司 Method and device for determining uploaded video and cloud server

Similar Documents

Publication Publication Date Title
CN106331778B (en) Video recommendation method and device
CN110674345A (en) Video searching method and device and server
CN109819284B (en) Short video recommendation method and device, computer equipment and storage medium
CN110378732B (en) Information display method, information association method, device, equipment and storage medium
CN106326391B (en) Multimedia resource recommendation method and device
JP5908930B2 (en) Hierarchy tags with community-based ratings
KR101944469B1 (en) Estimating and displaying social interest in time-based media
US9913001B2 (en) System and method for generating segmented content based on related data ranking
WO2017028624A1 (en) Method and device for processing resources
EP3873065B1 (en) Content recommendation method, mobile terminal, and server
CN109286850B (en) Video annotation method and terminal based on bullet screen
CN109753601B (en) Method and device for determining click rate of recommended information and electronic equipment
CN104469508A (en) Method, server and system for performing video positioning based on bullet screen information content
CN108073606B (en) News recommendation method and device for news recommendation
EP3403195A1 (en) Annotation of videos using aggregated user session data
CN111708909B (en) Video tag adding method and device, electronic equipment and computer readable storage medium
CN113407773A (en) Short video intelligent recommendation method and system, electronic device and storage medium
CN110309324B (en) Searching method and related device
CN109120996B (en) Video information identification method, storage medium and computer equipment
CN113626638A (en) Short video recommendation processing method and device, intelligent terminal and storage medium
CN107688587B (en) Media information display method and device
CN110569447B (en) Network resource recommendation method and device and storage medium
CN112541787A (en) Advertisement recommendation method, system, storage medium and electronic device
CN110110046B (en) Method and device for recommending entities with same name
CN106570003B (en) Data pushing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200110