CN114996505A - Video data processing method and device, electronic equipment and storage medium - Google Patents

Video data processing method and device, electronic equipment and storage medium

Info

Publication number
CN114996505A
Authority
CN
China
Prior art keywords
image
searched
target
video
current object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210672558.9A
Other languages
Chinese (zh)
Inventor
蒋灿
陈超
方鸿灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Original Assignee
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Happly Sunshine Interactive Entertainment Media Co Ltd filed Critical Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority to CN202210672558.9A
Publication of CN114996505A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/71 Indexing; Data structures therefor; Storage structures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/732 Query formulation
    • G06F16/7328 Query by example, e.g. a complete video frame or video sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Library & Information Science (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application discloses a video data processing method and device, electronic equipment and a storage medium. The method comprises the following steps: acquiring a current object to be searched, where the current object to be searched is an image input by a user or an image from a short video input by the user; extracting target identification information of the current object to be searched; searching a database, based on the target identification information of the current object to be searched, for at least one image matching the current object to be searched, and determining each image found as a target image, where the database stores target identification information of each frame image of a plurality of long videos, and the target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image; for each target image, finding the target long video corresponding to the target image from among the long videos; and feeding back each target long video to the client.

Description

Video data processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of multimedia information processing technologies, and in particular, to a video data processing method and apparatus, an electronic device, and a storage medium.
Background
With the continuous development of network technology, people can watch videos anytime and anywhere on electronic devices such as mobile phones and tablets. As a result, the number of video resources available to viewers keeps growing, and enabling users to find the videos they want quickly and effectively has become important.
When searching for a video, users mainly enter the video's name or keywords related to it, so that matching videos can be found based on that input and fed back to the user. Alternatively, users browse by video category, release date, and the like.
However, with the rapid development of self-media, users are often attracted by a film or episode poster, or by a short video clipped on a self-media platform, and hope to find the full video to watch. At that point the user may not know the name or keywords of the related video, so existing search methods cannot find the video quickly and effectively.
Disclosure of Invention
In view of the defects of the prior art, the present application provides a video data processing method and device, an electronic device and a storage medium, to solve the problem that the required video cannot be retrieved quickly with existing approaches.
In order to achieve the above object, the present application provides the following technical solutions:
a first aspect of the present application provides a video data processing method, including:
acquiring a current object to be searched; the current object to be searched is an image input by a user or an image in a short video input by the user;
extracting target identification information of the current object to be searched;
based on the target identification information of the current object to be searched, searching a database for at least one image matching the current object to be searched, and determining each image found as a target image; wherein the database stores the target identification information of each frame image of a plurality of long videos, and the target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image;
respectively searching a target long video corresponding to the target image from each long video aiming at each target image;
and feeding back each target long video to the client.
Optionally, in the above video data processing method, the method further includes:
determining the time point of each target image in the corresponding target long video as the current playback start time point of each target long video;
feeding back the current playback start time point of each target long video to the client;
and in response to an operation of a user playing any one target long video, starting to play the target long video from the current playback start time point of the target long video.
Optionally, in the above video data processing method, the acquiring a current object to be searched includes:
receiving a search object input by the user;
if the search object input by the user is a frame of image, determining the image input by the user as the current object to be searched;
if the search object input by the user is a short video, parsing the short video frame by frame to obtain each frame image of the short video;
and sequentially determining each frame of image of the short video as the current object to be searched until a target image matched with the current object to be searched is found from a database.
Optionally, in the above video data processing method, the target identification information is valid metadata and a hash value, and the extracting the target identification information of the current object to be searched includes:
and calculating the current object to be searched by using a preset target algorithm to obtain the metadata of the current object to be searched and the hash value of the current object to be searched.
Optionally, in the above video data processing method, the finding at least one image matching the current object to be searched from a database based on the target identification information of the current object to be searched, and determining each found image as a target image includes:
and searching the image of which the metadata and the hash value are consistent with the current object to be searched from the database, and determining the searched image as the target image.
Optionally, in the video data processing method, the method further includes:
if no image whose metadata and hash value are consistent with those of the current object to be searched is found in the database, searching the database, based on feature information of the current object to be searched, for each similar image whose similarity to the current object to be searched is greater than a preset value;
and determining each similar image as the target image.
Optionally, in the above video data processing method, the feeding back each target long video to the client includes:
sorting all the target long videos in descending order of the similarity between their corresponding target images and the current object to be searched to obtain a video recommendation list;
and feeding back the video recommendation list to the client.
A second aspect of the present application provides a video data processing apparatus comprising:
the object acquisition unit is used for acquiring the current object to be searched; the current object to be searched is an image input by a user or an image in a short video input by the user;
the information extraction unit is used for extracting target identification information of the current object to be searched;
the image matching unit is used for searching a database, based on the target identification information of the current object to be searched, for at least one image matching the current object to be searched, and determining each image found as a target image; wherein the database stores the target identification information of each frame image of a plurality of long videos, and the target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image;
the video searching unit is used for searching a target long video corresponding to the target image from each long video respectively aiming at each target image;
and the first feedback unit is used for feeding back each target long video to the client.
Optionally, in the video data processing apparatus, the apparatus further comprises:
the time determining unit is used for determining the time point of each target image in the corresponding target long video as the current playback start time point of each target long video;
the second feedback unit is used for feeding back the current playback start time point of each target long video to the client;
and the playing unit is used for, in response to an operation of a user playing any one target long video, starting to play the target long video from the current playback start time point of the target long video.
Optionally, in the above video data processing apparatus, the object obtaining unit includes:
a receiving unit for receiving a search object input by the user;
a first determining unit, configured to determine, if a search object input by the user is a frame of image, the image input by the user as the current object to be searched;
the parsing unit is used for parsing the short video frame by frame to obtain each frame image of the short video if the search object input by the user is a short video;
and the second determining unit is used for sequentially determining each frame of image of the short video as the current object to be searched until a target image matched with the current object to be searched is found from a database.
Optionally, in the above video data processing apparatus, the target identification information is valid metadata and a hash value, and the information extracting unit includes:
and the information calculation unit is used for calculating the current object to be searched by utilizing a preset target algorithm to obtain the metadata of the current object to be searched and the hash value of the current object to be searched.
Optionally, in the above video data processing apparatus, the image matching unit includes:
and the image matching subunit is used for searching the image of which the metadata and the hash value are consistent with the current object to be searched from the database, and determining the searched image as the target image.
Optionally, in the video data processing apparatus described above, the apparatus further comprises:
an image searching unit, configured to, if no image whose metadata and hash value are consistent with those of the current object to be searched is found in the database, search the database, based on the feature information of the current object to be searched, for each similar image whose similarity to the current object to be searched is greater than a preset value;
a third determining unit configured to determine each of the similar images as the target image.
Optionally, in the above video data processing apparatus, the first feedback unit includes:
the sorting unit is used for sorting the target long videos in descending order of the similarity between their corresponding target images and the current object to be searched to obtain a video recommendation list;
and the list feedback unit is used for feeding back the video recommendation list to the client.
A third aspect of the present application provides an electronic device comprising:
a memory and a processor;
wherein the memory is used for storing programs;
the processor is configured to execute the program, and the program, when executed, is specifically configured to implement the video data processing method according to any one of the above.
A fourth aspect of the present application provides a computer storage medium storing a computer program which, when executed, implements the video data processing method according to any one of the above.
According to the video data processing method provided by the present application, after the long videos are parsed into their frame images, information extraction is performed on each frame image to obtain the target identification information of each frame image of the long videos, and the target identification information is stored in a database. When the current object to be searched is acquired, its target identification information is extracted. The current object to be searched is an image input by a user or an image from a short video input by the user; that is, the user can input an image or a short video to search the site's videos. If the user inputs an image, that image is the current object to be searched; if the user inputs a short video, an image from the short video serves as the current object to be searched. Then, based on the target identification information of the current object to be searched, at least one image matching the current object to be searched is found in the database, and each image found is determined as a target image. For each target image, the target long video corresponding to that target image can then be found among the long videos, and finally each target long video is fed back to the client. In this way, based on the target identification information of the input image, or of an image from the input short video, the corresponding complete video is retrieved from an input image or short video, and the exact complete video can be found quickly without needing the video's name or keywords.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, it is obvious that the drawings in the following description are only embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a video data processing method according to an embodiment of the present application;
fig. 2 is a flowchart of a method for acquiring a current object to be searched according to an embodiment of the present disclosure;
fig. 3 is a flowchart of another video data processing method according to another embodiment of the present application;
fig. 4 is a schematic structural diagram of a video data processing apparatus according to another embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In this application, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual such relationship or order between those entities or actions. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
An embodiment of the present application provides a video data processing method, as shown in fig. 1, including the following steps:
s101, obtaining a current object to be searched.
The current object to be searched is an image input by a user or a frame image in a short video input by the user.
It should be noted that, in the embodiment of the present application, in order to allow a user to search for a complete video to play using either an image or a short video, and because a short video can be parsed into images, the search is performed based on image information. Therefore, if what the user inputs is an image, the current object to be searched can be obtained directly. If the input is a short video, an image needs to be extracted from the short video to serve as the current object to be searched.
Optionally, in another embodiment of the present application, as shown in fig. 2, a specific implementation manner of step S101 includes:
s201, receiving a search object input by a user.
S202, judging whether a search object input by a user is an image.
If it is determined that the search object input by the user is an image, step S203 is executed. If it is determined that the search object input by the user is not an image, that is, the search object is a short video, step S204 is executed.
S203, determining the image input by the user as the current object to be searched.
And S204, parsing the short video frame by frame to obtain each frame image of the short video.
And S205, sequentially determining each frame image of the short video as the current object to be searched until a target image matching the current object to be searched is found in the database.
That is to say, in the embodiment of the present application, each frame image of the short video is determined in turn as the current object to be searched and the subsequent steps are then performed. If a target image matching the current object to be searched is found in the database, the target long video can subsequently be determined, so there is no need to take further images of the short video as the current object to be searched and continue retrieving. If no target image matching the current object to be searched can be found in the database, a new image from the short video is taken as the current object to be searched and the retrieval continues.
Optionally, the images may be selected in turn as the current object to be searched according to their order in the short video, or randomly, or according to other criteria.
Of course, this retrieval method for short videos is only one option. Alternatively, multiple frame images of the short video may each be determined as a current object to be searched and the corresponding target long videos retrieved separately; the target long video that occurs most often among all results is then fed back, or the results are ranked by number of occurrences before being fed back.
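As an illustration only, the sequential handling of a short video described in steps S201 to S205 can be sketched in Python as below; the use of OpenCV (cv2) for frame-by-frame decoding and the find_target_images lookup callback are assumptions not named in the application.

```python
import cv2  # assumed here for frame-by-frame decoding; the application does not name a library


def iterate_search_objects(search_input):
    """Yield candidate query images: the input image itself, or each frame of a short video."""
    if isinstance(search_input, str):  # treat a path string as a short-video file
        cap = cv2.VideoCapture(search_input)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            yield frame  # each frame becomes the current object to be searched in turn
        cap.release()
    else:
        yield search_input  # a single input image is used directly


def search_until_match(search_input, find_target_images):
    """Try each candidate in order and stop at the first one that matches the database."""
    for candidate in iterate_search_objects(search_input):
        targets = find_target_images(candidate)  # hypothetical database lookup
        if targets:
            return candidate, targets
    return None, []
```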
S102, extracting target identification information of the current object to be searched.
The target identification information refers to information of a pre-specified type used to identify an image, that is, information that effectively and uniquely identifies an image. Based on the target identification information, the current object to be searched can be matched against the video frame images of the long videos.
S103, based on the target identification information of the current object to be searched, at least one image matched with the current object to be searched is searched from the database, and each searched image is determined as a target image.
The database stores the target identification information of each frame image of the plurality of long videos. The target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image.
Optionally, each long video may be parsed frame by frame and denoised by an offline big-data processing task; the target identification information of each frame image is then extracted in a specified manner, stored in the database, and a corresponding index is established.
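A minimal sketch of such an offline indexing task is given below, assuming OpenCV for frame extraction, the frame resolution as simple metadata, an MD5 digest of the raw pixels as the hash value, and SQLite as a stand-in for the database; none of these choices is fixed by the application, and the denoising step is omitted.

```python
import hashlib
import sqlite3

import cv2


def extract_identification_info(frame):
    """One possible realization of 'metadata + hash value' for a frame (assumption)."""
    height, width = frame.shape[:2]
    metadata = f"{width}x{height}"                     # simple structural metadata
    digest = hashlib.md5(frame.tobytes()).hexdigest()  # content hash of the raw pixels
    return metadata, digest


def index_long_video(video_path, video_id, db_path="frame_index.db"):
    """Parse a long video frame by frame and store each frame's identification info with an index."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS frames "
                 "(video_id TEXT, frame_no INTEGER, metadata TEXT, hash TEXT)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_meta_hash ON frames(metadata, hash)")
    cap = cv2.VideoCapture(video_path)
    frame_no = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        metadata, digest = extract_identification_info(frame)
        conn.execute("INSERT INTO frames VALUES (?, ?, ?, ?)",
                     (video_id, frame_no, metadata, digest))
        frame_no += 1
    cap.release()
    conn.commit()
    conn.close()
```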
Because an image can be uniquely identified by its target identification information, the images matching the current object to be searched can be found in the database; that is, every image whose target identification information is consistent with, or whose similarity meets the requirement for, that of the current object to be searched is found and determined as a target image, which facilitates subsequent processing.
Optionally, in another embodiment of the present application, the target identification information is valid metadata and a hash value, so in this embodiment of the present application, a specific implementation manner of step S102 is:
and calculating the current object to be searched by using a preset target algorithm to obtain the metadata of the current object to be searched and the hash value of the current object to be searched.
Because valid metadata and a hash value uniquely identify different images, using them effectively ensures that a target image exactly consistent with the current object to be searched is found.
Accordingly, in the embodiment of the present application, one implementation manner of step S103 is:
and searching an image with the metadata and the hash value consistent with the current object to be searched from the database, and determining the searched image as a target image.
Since the valid metadata and the hash value are unique, in the embodiment of the present application, there is only one unique target image found.
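Continuing the indexing sketch above, the exact-match lookup of this step could look like the following; extract_identification_info is the hypothetical helper defined in that sketch.

```python
import sqlite3


def find_exact_matches(query_frame, db_path="frame_index.db"):
    """Return (video_id, frame_no) rows whose metadata and hash equal those of the query image."""
    metadata, digest = extract_identification_info(query_frame)  # helper from the indexing sketch
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT video_id, frame_no FROM frames WHERE metadata = ? AND hash = ?",
        (metadata, digest),
    ).fetchall()
    conn.close()
    return rows
```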
And S104, respectively aiming at each target image, finding out a target long video corresponding to the target image from each long video.
The target long video corresponding to the target image refers to the long video containing the target image.
Optionally, after a long video is parsed frame by frame, each frame image may be bound to the long video so that the target long video corresponding to a target image can be found. Alternatively, the frame images of the same video may be stored in the same folder, with the name or address of the long video matching the folder name, so that the target long video corresponding to the target image can be found. Other means may of course be used.
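If the folder-naming variant mentioned above were used, the frame-to-video lookup could be as simple as the following sketch; the directory layout and file extension are assumptions.

```python
from pathlib import Path


def find_video_for_frame(frame_path, video_root="videos"):
    """Folder-naming scheme: the frames of one long video live in a folder named after that video."""
    folder_name = Path(frame_path).parent.name           # e.g. frames/episode_01/000123.jpg -> episode_01
    candidate = Path(video_root) / f"{folder_name}.mp4"  # hypothetical naming convention
    return candidate if candidate.exists() else None
```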
And S105, feeding back each target long video.
Optionally, if only one target long video is determined, only that target long video is fed back. If multiple target long videos are determined, each on the basis of similarity to the object to be searched, the target long videos may be sorted in descending order of the similarity between their corresponding target images and the object to be searched before being fed back to the client. Of course, when multiple target long videos are obtained, feedback may also follow other strategies, for example based on one or more of the play count, the number of times searched, the release date, and the like.
Optionally, since after seeing an image or a short video a user often wants to start watching the corresponding long video from that point, in another embodiment of the present application the method further includes: determining the time point of each target image in the corresponding target long video as the current playback start time point of that target long video, and feeding back the current playback start time point of each target long video to the client. In response to an operation of the user playing any one target long video, that target long video is played starting from its current playback start time point.
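Assuming the matched frame's index and the long video's frame rate are known, the playback start time point can be derived as sketched below; the field names are illustrative only.

```python
def playback_start_seconds(frame_no, fps):
    """Convert a matched frame index into a playback start time point in seconds."""
    return frame_no / fps


def build_play_response(matches, fps_by_video):
    """Attach a start time point to each target long video before it is fed back to the client."""
    return [
        {"video_id": video_id,
         "start_seconds": playback_start_seconds(frame_no, fps_by_video[video_id])}
        for video_id, frame_no in matches
    ]

# For example, a match at frame 1800 of a 25 fps long video yields a start time point of 72.0 seconds.
```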
According to the video data processing method provided by the embodiment of the present application, after the plurality of long videos are parsed into their frame images, information extraction is performed on each frame image to obtain the target identification information of each frame image of the long videos, and the target identification information is stored in the database. When the current object to be searched is acquired, its target identification information is extracted. The current object to be searched is an image input by a user or an image from a short video input by the user; that is, the user can input an image or a short video to search the site's videos. If the user inputs an image, that image is the current object to be searched; if the user inputs a short video, an image from the short video serves as the current object to be searched. Then, based on the target identification information of the current object to be searched, at least one image matching the current object to be searched is found in the database, and each image found is determined as a target image. For each target image, the target long video corresponding to that target image can then be found among the long videos, and finally each target long video is fed back to the client. In this way, based on the target identification information of the input image, or of an image from the input short video, the corresponding complete video is retrieved from an input image or short video, and the exact complete video can be found quickly without needing the video's name or keywords.
Another embodiment of the present application provides another video data processing method, as shown in fig. 3, including:
s301, obtaining the current object to be searched.
The current object to be searched is an image input by a user or an image in a short video input by the user;
it should be noted that, for the specific implementation of step S301, reference may be made to step S101 in the foregoing method embodiment, and details are not described here again.
S302, calculating the current object to be searched by using a preset target algorithm to obtain metadata of the current object to be searched and a hash value of the current object to be searched.
It should be noted that, in the embodiment of the present application, the target identification information is valid metadata and a hash value. Since the valid metadata and hash value of different images are unique, they allow a unique target image to be found; for this reason, they are used as the target identification information in this embodiment.
S303, searching the image with the metadata and the hash value consistent with the current object to be searched from the database, and determining the searched image as the target image.
The database stores the metadata and hash values of each frame image of a plurality of long videos, and corresponding indexes may be established in it for convenient searching. After a long video is parsed into its frame images, each frame image is processed with the preset target algorithm to obtain the target identification information of each frame image of each long video.
It should be noted that, because the valid metadata and hash value are unique, the target image cannot be found by mere similarity of the target identification information; only an exactly consistent target image can be found. However, this approach more effectively ensures the accuracy of the finally retrieved long video.
S304, searching the target long video corresponding to the target image from each long video.
It should be noted that, for the specific implementation of step S304, reference may be made to step S104 in the foregoing method embodiment, and details are not described here again.
It should be noted that although there is only one target image, multiple videos may contain that target image, so there may be multiple target long videos.
Optionally, when there are multiple target long videos, one or more of them may be selected for feedback according to a preset policy, for example according to information such as play count. Alternatively, after step S304 is executed, some target long videos may be filtered out according to a preset policy, so that one or more target long videos remain for the subsequent steps.
S305, determining the time point of the target image in the corresponding target long video as the current playback start time point of each target long video.
Similarly, considering that the user usually wants to watch the long video starting from the input image or short video, in the embodiment of the present application the time point of each target image in the corresponding target long video is determined as the current playback start time point of that target long video, so that playback can later start from the target image.
S306, feeding back each target long video to the client, and feeding back the current playback start time point of each target long video to the client.
Optionally, since the valid metadata and the hash value are unique, it may well happen that no exactly matching target image is found; to ensure that a retrieval result can still be fed back to the user in that case, in another embodiment of the present application, after step S303 is executed, if no image whose metadata and hash value are consistent with those of the current object to be searched is found in the database, each similar image whose similarity to the current object to be searched is greater than a preset value is found in the database based on the feature information of the current object to be searched, and each similar image is determined as a target image, so that the subsequent steps can continue with these target images.
The feature information refers to image feature information other than the metadata and the hash value. Therefore, in the embodiment of the present application, the feature information of each frame image of the long videos needs to be acquired in advance. Although, unlike the metadata and hash value, the feature information cannot directly yield a unique and exact result, it ensures that the long video the user wants to retrieve can still be obtained and fed back to the user.
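One possible realization of this fallback, using a normalized color histogram as the feature information and cosine similarity against a preset threshold, is sketched below; the application does not specify which features or similarity measure to use, so these are assumptions.

```python
import cv2
import numpy as np


def color_histogram(frame, bins=8):
    """A simple stand-in feature vector: a normalized BGR color histogram (assumption)."""
    hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3,
                        [0, 256, 0, 256, 0, 256]).flatten()
    return hist / (np.linalg.norm(hist) + 1e-9)


def find_similar_images(query_frame, indexed_features, threshold=0.9):
    """Return (video_id, frame_no, similarity) for frames above the preset similarity value."""
    query_vec = color_histogram(query_frame)
    results = []
    for (video_id, frame_no), feature_vec in indexed_features.items():
        similarity = float(np.dot(query_vec, feature_vec))  # cosine similarity on unit-norm vectors
        if similarity > threshold:
            results.append((video_id, frame_no, similarity))
    return results
```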
Correspondingly, in this embodiment of the application, feeding back each target long video to the client in step S306 may specifically include: sorting all target long videos in descending order of the similarity between their corresponding target images and the current object to be searched to obtain a video recommendation list, and feeding back the video recommendation list to the client.
Sorting the target long videos by similarity makes it convenient for the user to find the desired long video. Optionally, when the current playback start time point of a target long video needs to be fed back, it may be added to the video recommendation list so as to be fed back to the client together.
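Continuing the sketch above, the video recommendation list could then be assembled as follows, sorting by similarity in descending order and attaching the playback start time points; all names are illustrative.

```python
def build_recommendation_list(similar_matches, fps_by_video):
    """similar_matches: iterable of (video_id, frame_no, similarity) from the fallback search."""
    ranked = sorted(similar_matches, key=lambda m: m[2], reverse=True)  # highest similarity first
    return [
        {"video_id": video_id,
         "similarity": round(similarity, 4),
         "start_seconds": frame_no / fps_by_video[video_id]}
        for video_id, frame_no, similarity in ranked
    ]
```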
S307, in response to an operation of the user playing any one target long video, starting to play the target long video from the current playback start time point of the target long video.
Specifically, for each target long video fed back, when the user clicks to play any one of them, the playback progress of that target long video jumps directly to its current playback start time point, and playback then begins from there.
Another embodiment of the present application provides a video data processing apparatus, as shown in fig. 4, including:
an object obtaining unit 401, configured to obtain a current object to be searched.
The current object to be searched is an image input by a user or an image in a short video input by the user.
An information extraction unit 402, configured to extract target identification information of the current object to be searched.
An image matching unit 403, configured to find at least one image that matches the current object to be searched from the database based on the target identification information of the current object to be searched, and determine each found image as a target image.
The database stores the target identification information of each frame image of the plurality of long videos. The target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image.
And a video searching unit 404, configured to search, for each target image, a target long video corresponding to the target image from each long video.
A first feedback unit 405, configured to feed back each target long video to the client.
Optionally, a video data processing apparatus provided in another embodiment of the present application further includes:
and the time determining unit is used for determining the time point of each target image in the corresponding target long video as the current broadcasting starting time point of each target long video.
And the second feedback unit is used for feeding back the current play-on time point of each target long video to the client.
And the playing unit is used for responding to the operation of playing any one target long video by the user and starting playing the target long video from the current playing time point of the target long video.
Optionally, in a video data processing apparatus provided in another embodiment of the present application, an object obtaining unit includes:
and the receiving unit is used for receiving a search object input by a user.
The first determining unit is used for determining the image input by the user as the current object to be searched if the search object input by the user is a frame of image.
And the parsing unit is used for parsing the short video frame by frame to obtain each frame image of the short video if the search object input by the user is a short video.
And the second determining unit is used for sequentially determining each frame of image of the short video as the current object to be searched until the target image matched with the current object to be searched is found from the database.
Optionally, in a video data processing apparatus provided in another embodiment of the present application, the target identification information is valid metadata and a hash value, and the information extracting unit includes:
and the information calculation unit is used for calculating the current object to be searched by utilizing a preset target algorithm to obtain the metadata of the current object to be searched and the hash value of the current object to be searched.
Optionally, in the above video data processing apparatus, the image matching unit includes:
and the image matching subunit is used for searching the image of which the metadata and the hash value are consistent with the current object to be searched from the database and determining the searched image as the target image.
Optionally, a video data processing apparatus provided in another embodiment of the present application further includes:
and the image searching unit is used for searching out each similar image with the similarity greater than a preset value with the current object to be searched from the database based on the characteristic information of the current object to be searched if the image with the metadata and the hash value consistent with the current object to be searched is not searched from the database.
And a third determining unit for determining each similar image as the target image.
Optionally, in the video data processing apparatus described above, the first feedback unit includes:
and the sequencing unit is used for sequencing all the target long videos according to the sequence from the big similarity to the small similarity of the corresponding target images and the current object to be searched to obtain a video recommendation list.
And the list feedback unit is used for feeding back the video recommendation list to the client.
It should be noted that, for the specific working processes of each unit provided in the foregoing embodiments of the present application, corresponding steps in the foregoing method embodiments may be referred to accordingly, and are not described herein again.
Another embodiment of the present application provides an electronic device, as shown in fig. 5, including:
a memory 501 and a processor 502.
The memory 501 is used for storing programs.
The processor 502 is configured to execute the program in the memory 501, and the program, when executed, is specifically configured to implement the video data processing method provided in any of the above embodiments.
Another embodiment of the present application provides a computer storage medium for storing a computer program, which when executed, is used to implement the video data processing method provided in any one of the above embodiments.
Computer storage media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of processing video data, comprising:
acquiring a current object to be searched; the current object to be searched is an image input by a user or an image in a short video input by the user;
extracting target identification information of the current object to be searched;
based on the target identification information of the current object to be searched, searching a database for at least one image matching the current object to be searched, and determining each image found as a target image; wherein the database stores the target identification information of each frame image of a plurality of long videos, and the target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image;
respectively searching a target long video corresponding to the target image from each long video aiming at each target image;
and feeding back each target long video to the client.
2. The method of claim 1, further comprising:
determining the time point of each target image in the corresponding target long video as the current playback start time point of each target long video;
feeding back the current playback start time point of each target long video to the client;
and in response to an operation of a user playing any one target long video, starting to play the target long video from the current playback start time point of the target long video.
3. The method according to claim 1, wherein the obtaining the current object to be searched comprises:
receiving a search object input by the user;
if the search object input by the user is a frame of image, determining the image input by the user as the current object to be searched;
if the search object input by the user is a short video, parsing the short video frame by frame to obtain each frame image of the short video;
and sequentially determining each frame of image of the short video as the current object to be searched until a target image matched with the current object to be searched is found from a database.
4. The method according to claim 1, wherein the target identification information is valid metadata and a hash value, and the extracting the target identification information of the current object to be searched comprises:
and calculating the current object to be searched by using a preset target algorithm to obtain the metadata of the current object to be searched and the hash value of the current object to be searched.
5. The method according to claim 4, wherein the finding at least one image matching the current object to be searched from a database based on the target identification information of the current object to be searched, and determining each found image as a target image comprises:
and searching the image of which the metadata and the hash value are consistent with the current object to be searched from the database, and determining the searched image as the target image.
6. The method of claim 5, further comprising:
if no image whose metadata and hash value are consistent with those of the current object to be searched is found in the database, searching the database, based on feature information of the current object to be searched, for each similar image whose similarity to the current object to be searched is greater than a preset value;
and determining each similar image as the target image.
7. The method of claim 6, wherein the feeding back each target long video to the client comprises:
sorting all the target long videos in descending order of the similarity between their corresponding target images and the current object to be searched to obtain a video recommendation list;
and feeding back the video recommendation list to the client.
8. A video data processing apparatus, comprising:
the object acquisition unit is used for acquiring the current object to be searched; the current object to be searched is an image input by a user or an image in a short video input by the user;
the information extraction unit is used for extracting target identification information of the current object to be searched;
the image matching unit is used for searching a database, based on the target identification information of the current object to be searched, for at least one image matching the current object to be searched, and determining each image found as a target image; wherein the database stores the target identification information of each frame image of a plurality of long videos, and the target identification information of each frame image of a long video is obtained by parsing the long video into its frame images and then extracting information from each frame image;
the video searching unit is used for searching a target long video corresponding to the target image from each long video respectively aiming at each target image;
and the first feedback unit is used for feeding back each target long video to the client.
9. An electronic device, comprising:
a memory and a processor;
wherein the memory is used for storing programs;
the processor is configured to execute the program, and the program, when executed, is specifically configured to implement the video data processing method of any one of claims 1 to 7.
10. A computer storage medium storing a computer program which, when executed, implements the video data processing method of any one of claims 1 to 7.
CN202210672558.9A 2022-06-15 2022-06-15 Video data processing method and device, electronic equipment and storage medium Pending CN114996505A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210672558.9A CN114996505A (en) 2022-06-15 2022-06-15 Video data processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210672558.9A CN114996505A (en) 2022-06-15 2022-06-15 Video data processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114996505A (en) 2022-09-02

Family

ID=83035836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210672558.9A Pending CN114996505A (en) 2022-06-15 2022-06-15 Video data processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114996505A (en)

Similar Documents

Publication Publication Date Title
US8370358B2 (en) Tagging content with metadata pre-filtered by context
US10831814B2 (en) System and method for linking multimedia data elements to web pages
US8095540B2 (en) Identifying superphrases of text strings
CN111046235B (en) Method, system, equipment and medium for searching acoustic image archive based on face recognition
CN108520046B (en) Method and device for searching chat records
EP2234024A1 (en) Context based video finder
CN111327955B (en) User portrait based on-demand method, storage medium and smart television
MX2013005056A (en) Multi-modal approach to search query input.
CN111294660B (en) Video clip positioning method, server, client and electronic equipment
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
CN110674345A (en) Video searching method and device and server
CN113407773A (en) Short video intelligent recommendation method and system, electronic device and storage medium
RU2568276C2 (en) Method of extracting useful content from mobile application setup files for further computer data processing, particularly search
US11010398B2 (en) Metadata extraction and management
CN107748772B (en) Trademark identification method and device
CN111224923A (en) Detection method, device and system for counterfeit websites
US20170060862A1 (en) Method and system for content retrieval based on rate-coverage optimization
EP3706014A1 (en) Methods, apparatuses, devices, and storage media for content retrieval
CN114003799A (en) Event recommendation method, device and equipment
CN114996505A (en) Video data processing method and device, electronic equipment and storage medium
Raimond et al. Using the past to explain the present: interlinking current affairs with archives via the semantic web
CN113486212A (en) Search recommendation information generation and display method, device, equipment and storage medium
CN114020963A (en) Method and device for searching similar or repeated videos
CN109474832B (en) Information searching and sorting method, intelligent terminal and storage medium
CN113537215A (en) Method and device for labeling video label

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination