CN116049490A - Material searching method and device and electronic equipment - Google Patents

Material searching method and device and electronic equipment

Info

Publication number
CN116049490A
CN116049490A CN202310115488.1A CN202310115488A
Authority
CN
China
Prior art keywords
video
videos
materials
target
search
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310115488.1A
Other languages
Chinese (zh)
Inventor
赵铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zitiao Network Technology Co Ltd
Original Assignee
Beijing Zitiao Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zitiao Network Technology Co Ltd filed Critical Beijing Zitiao Network Technology Co Ltd
Priority to CN202310115488.1A
Publication of CN116049490A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/7867 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using information manually generated, e.g. tags, keywords, comments, title and artist information, manually generated time, location and usage information, user ratings
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73 Querying
    • G06F16/738 Presentation of query results

Abstract

Embodiments of the present application disclose a material searching method and apparatus, and an electronic device. One embodiment of the method comprises: in response to receiving a search term, determining the material hit by the search term; acquiring multiple groups of videos corresponding to the material, where the material produces the same usage effect within each group of videos and each group comprises one or more target videos; and displaying, in the search results, a video clip corresponding to each group of target videos, where the video clips represent the usage effect of the material. This implementation simplifies the search process, shortens search time, and improves the diversity of search results.

Description

Material searching method and device and electronic equipment
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a material searching method, a material searching device and electronic equipment.
Background
In existing search scenarios, after a user enters a material search term, the returned search result is usually just an entry point to the material's feature. The user must open that feature and try it before the material's usage effect becomes clear. The user therefore has to perform multiple operations during a material search before seeing the final effect, which makes the search process cumbersome and time-consuming.
Disclosure of Invention
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In a first aspect, an embodiment of the present disclosure provides a material searching method, including: in response to receiving a search term, determining the material hit by the search term; acquiring multiple groups of videos corresponding to the material, where the material produces the same usage effect within each group of videos and each group comprises one or more target videos; and displaying, in the search results, a video clip corresponding to each group of target videos, where the video clips represent the usage effect of the material.
In a second aspect, an embodiment of the present disclosure provides a material searching apparatus, including: a determining unit configured to determine, in response to receiving a search term, the material hit by the search term; an acquiring unit configured to acquire multiple groups of videos corresponding to the material, where the material produces the same usage effect within each group of videos and each group comprises one or more target videos; and a display unit configured to display, in the search results, a video clip corresponding to each group of target videos, where the video clips represent the usage effect of the material.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the material search method as described in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium having stored thereon a computer program which, when executed by a processor, implements the steps of the material search method according to the first aspect.
According to the material searching method and apparatus and the electronic device provided by the embodiments of the present disclosure, the material hit by a search term is determined in response to receiving the search term; multiple groups of videos corresponding to the material are then acquired, where the material produces the same usage effect within each group and each group comprises one or more target videos; and a video clip corresponding to each group of target videos is then displayed in the search results. In this way, the user can see a material's usage effect without performing multiple operations during the search, which simplifies the search process and shortens search time. In addition, the search results include multiple video clips showing different usage effects of the material, which improves the diversity of the search results.
Drawings
The above and other features, advantages, and aspects of embodiments of the present disclosure will become more apparent by reference to the following detailed description when taken in conjunction with the accompanying drawings. The same or similar reference numbers will be used throughout the drawings to refer to the same or like elements. It should be understood that the figures are schematic and that elements and components are not necessarily drawn to scale.
FIG. 1 is a flow chart of one embodiment of a material search method according to the present disclosure;
FIG. 2 is a flow chart of yet another embodiment of a material search method according to the present disclosure;
FIG. 3 is a schematic illustration of one application scenario of a material search method according to the present disclosure;
FIG. 4 is a flow chart of another embodiment of a material search method according to the present disclosure;
FIG. 5 is a schematic illustration of yet another application scenario of a material search method according to the present disclosure;
FIG. 6 is a flow chart of yet another embodiment of a material search method according to the present disclosure;
FIG. 7 is a schematic diagram of another application scenario of a material search method according to the present disclosure;
fig. 8 is a schematic structural view of an embodiment of a material search apparatus according to the present disclosure;
FIG. 9 is an exemplary system architecture diagram in which various embodiments of the present disclosure may be applied;
Fig. 10 is a schematic diagram of a computer system suitable for use in implementing embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the accompanying drawings, it should be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustration purposes only and are not intended to limit the scope of the present disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order and/or performed in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including, but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Related definitions of other terms will be given in the description below.
It should be noted that the terms "first," "second," and the like in this disclosure are merely used to distinguish between different devices, modules, or units and are not used to define an order or interdependence of functions performed by the devices, modules, or units.
It should be noted that references to "one" and "a plurality" in this disclosure are illustrative rather than limiting; those of ordinary skill in the art will appreciate that they should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information interacted between the various devices in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of such messages or information.
Referring to fig. 1, a flow 100 of one embodiment of a material search method according to the present disclosure is shown. The material searching method comprises the following steps:
in step 101, in response to receiving the search term, material hit by the search term is determined.
In this embodiment, the execution subject of the material search method may detect whether a search term is received. Here, the user may search through search terms in a target video application (e.g., short video social software, video editing tools, etc.).
If a search term is received, the execution subject may determine the material hit by the search term. A material can be understood as an element used in making a video, such as a filter. Here, the execution subject may determine the hit material via retrieval and recall. Specifically, it may obtain the material names of the materials in a preset material library; then, for each material in the library, determine the text similarity between that material's name and the search term; and then select the material with the highest text similarity from the library as the material hit by the search term.
It should be noted that the execution subject may determine whether the highest text similarity is greater than or equal to a preset text similarity threshold. If it is, the search term may be considered to hit the material, and step 102 is then performed; if it is below the threshold, the search term may be considered to miss, and step 102 is not performed.
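The retrieval-and-recall step above (name lookup, similarity ranking, threshold check) can be sketched as follows. This is a minimal illustration, not the patented implementation: `difflib.SequenceMatcher` stands in for whatever text-similarity measure an embodiment actually uses, and the threshold value is arbitrary.

```python
from difflib import SequenceMatcher

def find_hit_material(search_term, material_names, threshold=0.5):
    """Return the material name most similar to the search term, or None
    when even the best similarity is below the threshold (a miss)."""
    scored = [(name, SequenceMatcher(None, search_term, name).ratio())
              for name in material_names]
    best_name, best_score = max(scored, key=lambda pair: pair[1])
    return best_name if best_score >= threshold else None
```

On a miss the method stops before step 102; on a hit the returned material drives the video lookup that follows.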
Step 102, obtaining a plurality of groups of videos corresponding to the materials.
In this embodiment, the execution subject may acquire multiple groups of videos corresponding to the material. Each group of videos includes one or more target videos. A target video may be a video meeting a preset condition, for example, a video whose play count is greater than a preset play-count threshold, or a video whose average play duration is greater than a preset duration threshold.
Here, the material produces the same usage effect within each group of videos. As an example, suppose the search term hits a "New Year" special effect, and the effect renders as "fireworks" in videos 1 and 3, as a "good fortune" character in videos 2 and 5, and as "happiness and wealth" in videos 4 and 6. Then videos 1 and 3 form one group, videos 2 and 5 a second group, and videos 4 and 6 a third group.
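The grouping in this example amounts to partitioning videos by the effect they render. A minimal sketch (the per-video effect label is assumed to be already known; in practice it would come from the grouping model described later in the disclosure):

```python
from collections import defaultdict

def group_by_effect(videos):
    """Partition (video_id, effect) pairs into groups sharing one effect."""
    groups = defaultdict(list)
    for video_id, effect in videos:
        groups[effect].append(video_id)
    return dict(groups)

# The six-video example from the text:
videos = [(1, "firework"), (2, "good"), (3, "firework"),
          (4, "happiness and wealth"), (5, "good"),
          (6, "happiness and wealth")]
```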
And step 103, displaying video clips corresponding to each group of target videos in the search results.
In this embodiment, the execution subject may display, in the search results, a video clip corresponding to each group of target videos, where the video clips are generally used to represent the usage effect of the material. The video clips may be displayed as animated images, i.e., an animated image of the clip for each group of target videos plays simultaneously in the search results. An animated image produces a dynamic effect by switching through a particular set of still frames at a specified frequency.
Here, after a plurality of sets of videos corresponding to the material are acquired, the execution body may analyze the acquired videos to determine a start-stop time (i.e., a start time and an end time) of using the material in the videos; then, according to the start-stop time, a video fragment using the material can be intercepted from the acquired video.
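Once the start and stop times are known, cutting the clip reduces to mapping times to frame indices. A sketch under the simplifying assumption of a constant frame rate (this helper is illustrative, not part of the disclosure):

```python
def clip_frame_range(start_s, end_s, fps=30):
    """Map a start/stop time in seconds to the half-open frame-index
    range [first, last) to cut from the decoded video."""
    return int(start_s * fps), int(end_s * fps)
```

For the 1 minute 20 seconds to 1 minute 50 seconds example given later, this yields frames 2400 to 3300 at 30 fps.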
As an example, the execution body may arrange the video clips in one or more rows, for example, if the number of video clips is 3, 3 video clips may be arranged in one row, and if the number of video clips is 6, 6 video clips may be arranged in two rows. The ranked video clips may then be placed at the very top of the search results so that the user can see the video clips first.
Here, the executing body may sort the video clips based on at least one of an amount of interaction, an average playing time length, and a number of playing times of the video from which the video clips are derived.
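The sort-then-layout step can be sketched as a ranking followed by chunking into display rows. The lexicographic key below (interactions first, then average play time, then play count) is one assumed way of combining the three signals; the disclosure leaves the exact combination open.

```python
def rank_clips(clips, row_width=3):
    """Sort clips by (interactions, average play seconds, plays),
    descending, then chunk the ranked list into display rows."""
    ranked = sorted(clips,
                    key=lambda c: (c["interactions"], c["avg_play_s"], c["plays"]),
                    reverse=True)
    return [ranked[i:i + row_width] for i in range(0, len(ranked), row_width)]

clips = [
    {"id": "a", "interactions": 10, "avg_play_s": 30, "plays": 500},
    {"id": "b", "interactions": 50, "avg_play_s": 12, "plays": 300},
    {"id": "c", "interactions": 50, "avg_play_s": 20, "plays": 100},
    {"id": "d", "interactions": 5,  "avg_play_s": 40, "plays": 900},
]
```

With four clips and a row width of three, the result is one full row of three plus a second row of one, matching the one-or-more-rows layout described above.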
The method provided by this embodiment of the disclosure determines the material hit by a search term in response to receiving the search term; then acquires multiple groups of videos corresponding to the material, where the material produces the same usage effect within each group and each group comprises one or more target videos; and then displays a video clip corresponding to each group of target videos in the search results. The method integrates and precisely matches resources across different retrieval intents, and by aggregating picture-led multimedia resources it can play several groups of multimedia content at once, so the user can see a material's usage effect without performing multiple operations during the search, which simplifies the search process and shortens search time. In addition, because the videos are grouped by the material's usage effect and a clip is displayed for each group, the search results include multiple clips showing different usage effects of the material, which improves the diversity of the search results.
In some optional implementations, the execution subject may display the video clips corresponding to each group of target videos in the search results as follows. The execution subject may score each group's video clip based on the video information of the video from which the clip is taken. The video information may include at least one of: comment information, material-use conversion rate, and the video quality of the clip that uses the material. The material-use conversion rate is typically the ratio of the number of viewers who go on to search for or use the material after watching the video to the total number of viewers of the video. The comment information may include users' comments on the material used in the video. The execution subject may locate the clip that uses the material within the video and then measure its quality with video quality assessment, which refers to perceiving, measuring, and evaluating, in subjective and objective ways, the changes and distortion between two pieces of video content that are nominally identical.
As an example, the executing body may determine the number of positive evaluations of the user for the material used in the video in the comment information of the video, and then may find a score corresponding to the number of positive evaluations from a preset first relationship table as the first score. The first relationship table may be used to characterize a correspondence between the number of positive evaluations and the score.
The executing body may search the score corresponding to the material usage conversion rate from the preset second relation table as the second score. The second relationship table may be used to characterize a correspondence between material usage conversion and scores.
The executing body may search a score corresponding to the video quality evaluation from a preset third relation table as a third score. The third relationship table may be used to characterize a correspondence between video quality ratings and scores.
The execution subject may take any one of the first, second, and third scores as the clip's score, or take their sum, or take their weighted average.
And then, the video clips can be sequenced according to the sequence from the high score to the low score, and a sequencing result is obtained. The above-described ranking results are used to indicate the ranking of the individual video clips.
The video clips may then be presented in the search results according to the ranking results. For example, the video clips may be displayed in one or more rows. In this way, the display effect of the video clip can be improved.
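The three relationship-table lookups and the weighted-average combination above can be sketched as below. The step-table representation and all table contents and weights are invented for illustration; the disclosure only specifies that each table maps its input to a score.

```python
import bisect

def table_score(value, thresholds, scores):
    """Step-table lookup: the score of the largest threshold <= value.
    Stands in for the first/second/third relationship tables."""
    i = bisect.bisect_right(thresholds, value) - 1
    return scores[max(i, 0)]

def score_clip(pos_comments, conversion, quality, w=(0.3, 0.4, 0.3)):
    """Weighted average of the three sub-scores, one of the combination
    rules described above (table contents and weights are assumptions)."""
    s1 = table_score(pos_comments, [0, 10, 100], [0.2, 0.6, 1.0])
    s2 = table_score(conversion, [0.0, 0.1, 0.3], [0.2, 0.6, 1.0])
    s3 = quality  # assumed already normalised to [0, 1]
    return w[0] * s1 + w[1] * s2 + w[2] * s3
```

Clips would then be sorted by this score, high to low, before being laid out in rows.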
In some alternative implementations, a material identifier may be presented on each video clip. The material identifier may include the material name, so that the user can see which material the currently displayed clip uses.
In some alternative implementations, the video clips described above may be generated by: the execution subject may input the target video into a pre-trained material detection model to obtain a start-stop time of the target video using the material. The material detection model can be used for representing the corresponding relation between the video and the start-stop time of using the material in the video. Then, according to the start-stop time, a video clip using the material can be cut out from the target video. As an example, if the start-stop time of the material detection model output is: 1 minute 20 seconds to 1 minute 50 seconds, a video clip of 1 minute 20 seconds to 1 minute 50 seconds can be cut out from the target video. In this way, the video clips using the materials in the video can be more accurately identified and intercepted.
In some alternative implementations, the execution entity may determine the material hit by the search term by: the execution body can acquire keywords corresponding to materials in a preset material library. The keywords corresponding to the material may also be referred to as material tags. And determining the matching degree between the keyword corresponding to each material and the search word for each material in the material library. As an example, the keyword corresponding to the material and the search term may be input into a pre-trained matching degree prediction model, so as to obtain the matching degree between the keyword corresponding to the material and the search term. Then, based on the matching degree, a target material can be selected from the material library as a material hit by the search term. As an example, the material corresponding to the maximum matching degree may be selected from the material library as the material hit by the search term; and selecting materials with the matching degree larger than a preset matching degree threshold value from the material library as the materials hit by the search word. In this way, the material hit by the search term can be determined more accurately.
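Both selection rules mentioned above (the single best-matching material, or every material above a matching-degree threshold) can be sketched over (material, matching-degree) pairs; the matching degrees themselves would come from the matching-degree prediction model:

```python
def hit_materials(matches, threshold=0.7):
    """matches: list of (material_name, matching_degree) pairs.
    Returns (best-matching material, all materials above threshold)."""
    best = max(matches, key=lambda m: m[1])[0]
    above = [name for name, degree in matches if degree > threshold]
    return best, above
```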
In some optional implementations, after determining the material hit by the search term, the execution body may determine whether the number of materials hit by the search term is greater than a preset number threshold (e.g., 10); if the number of the hit materials of the search word is greater than the number threshold, the execution subject may score the hit materials based on the material information of the materials. The material information may include at least one of: the contribution amount and the creation time corresponding to the materials. In general, the larger the contribution amount corresponding to the material is, the higher the score of the material is; the closer the creation time of the material, the higher the score of the material. And then, selecting the threshold number of materials from the hit materials according to the order of the scores from high to low, and re-determining the selected materials as the materials hit by the search word. By the method, more proper materials can be selected as hit materials under the condition that the number of hit materials is large.
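The re-selection step when too many materials are hit can be sketched as below. The scoring formula is invented for illustration; the disclosure only states that a larger contribution amount and a more recent creation time both raise a material's score.

```python
def shortlist_materials(materials, limit, now):
    """Keep the `limit` highest-scoring materials.  The score grows with
    the contribution (submission) count and decays with age in days."""
    def score(m):
        days_old = (now - m["created"]) / 86_400  # "created" in epoch seconds
        return m["submissions"] + 1000.0 / (1.0 + days_old)
    return sorted(materials, key=score, reverse=True)[:limit]
```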
In some alternative implementations, the material may include at least one of: special effects, templates, transitions, stickers, and hot spot themes. Special effects are usually created by computer software for adding picture effects to a captured video. Templates may also be referred to as video frames, and users may automatically generate their own videos by uploading images and music. Transition generally refers to a scene-to-scene transition or transition in video. Stickers generally refer to a patterned decoration added to video. The hot spot topic may be content of comparative public interest, such as internal marry dance, live video, etc. By setting the material to at least one of special effects, templates, transition, stickers and hot spot topics, the application scene of the material searching method can be wider.
Referring to fig. 2, a flow 200 of yet another embodiment of a material search method is shown. The material searching method flow 200 includes the following steps:
in response to receiving the search term, material hit by the search term is determined, step 201.
In this embodiment, step 201 may be performed in a similar manner to step 101, and will not be described here.
Step 202, a plurality of videos using a material are acquired.
In this embodiment, the execution body may acquire a plurality of videos using the material.
Here, after a user publishes a video that uses a certain material, the video is added to the video set corresponding to that material, thereby building the correspondence between each material and its videos.
And 203, grouping the videos according to the using effect of the materials in the videos.
In this embodiment, the executing body may group the plurality of videos acquired in step 202 according to the use effect of the material in the videos, so as to group the videos with the same use effect of the material into a group. Here, the execution subject may group a plurality of videos by clustering, and group videos having the same use effect of the material into one category.
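The clustering step can be sketched with a greedy single-pass grouping over per-video effect features. A scalar feature and a fixed tolerance are simplifying assumptions; the disclosure's grouping model is CNN-based and works on image frames.

```python
def group_videos(features, tol=0.1):
    """Each video joins the first group whose representative feature is
    within `tol` of its own; otherwise it starts a new group."""
    groups = []  # list of (representative_feature, member_indices)
    for idx, feature in enumerate(features):
        for rep, members in groups:
            if abs(feature - rep) <= tol:
                members.append(idx)
                break
        else:
            groups.append((feature, [idx]))
    return [members for _, members in groups]
```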
Step 204, selecting a target video from the group of videos for each group of videos.
In this embodiment, for each group of videos obtained by grouping, the execution subject may select a target video from the group of videos.
As an example, the execution subject may arbitrarily select a preset number (e.g., 2) of videos from the set of videos as the target video.
As another example, the executing subject may also select the target video from the set of videos based on video information of the videos. The video information may include at least one of: the interaction amount of the video, the average playing time length of the video and the playing times of the video. The amount of interaction may be determined by at least one of a praise amount, a comment amount, a collection amount, and a forwarding amount of the video. For example, the weighted average of the praise amount, the comment number, the collection amount, and the transfer amount may be used, or the sum of the praise amount, the comment number, the collection amount, and the transfer amount may be used.
Specifically, for each video in the set of videos, the executing body may score the video by using at least one of an interaction amount, an average playing duration, and a playing number of the video; then, a preset number of videos from the group of videos can be selected as target videos according to the order of the scores from high to low.
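The score-then-select step above can be sketched as follows. The weights mixing the three signals are illustrative, not specified by the disclosure; the interaction count would itself be a weighted sum of likes, comments, collections, and forwards as described earlier.

```python
def pick_target_videos(group, n=2, weights=(0.5, 0.3, 0.2)):
    """Score each video by a weighted mix of interaction count, average
    play seconds and play count, then keep the top `n` as targets."""
    def score(v):
        return (weights[0] * v["interactions"]
                + weights[1] * v["avg_play_s"]
                + weights[2] * v["plays"])
    return sorted(group, key=score, reverse=True)[:n]

group = [{"id": "x", "interactions": 100, "avg_play_s": 10, "plays": 50},
         {"id": "y", "interactions": 10, "avg_play_s": 5, "plays": 20},
         {"id": "z", "interactions": 200, "avg_play_s": 30, "plays": 10}]
```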
And 205, displaying video clips corresponding to each group of target videos in the search results.
In this embodiment, step 205 may be performed in a similar manner to step 103, and will not be described herein.
As can be seen from fig. 2, compared with the embodiment corresponding to fig. 1, the flow 200 of the material searching method in this embodiment shows a step of grouping a plurality of videos using the material according to the use effect of the material in the videos, and selecting a target video from each group of videos. Therefore, the target video can be more accurately selected according to the scheme described by the embodiment, and the diversity of the display result is further improved.
In some alternative implementations, the execution body may group the plurality of videos by: the execution body may input the plurality of videos into a pre-trained video grouping model to obtain a video grouping result. The video grouping model may be a convolutional neural network (Convolutional Neural Networks, CNN) that performs grouping functions by analyzing image frames in a video. In this way video grouping can be better achieved.
In some alternative implementations, the execution subject may select the target videos from a group of videos as follows: it may select a preset number of videos from the group as target videos based on the group's video information. The video information may include at least one of: comment information, material-use conversion rate, and the video quality of the clip that uses the material. The material-use conversion rate is typically the ratio of the number of viewers who go on to search for or use the material after watching the video to the total number of viewers. For example, if video A has 100 viewers and 20 of them perform a search for the material used in video A after watching it, the material-use conversion rate of video A is 20%. The comment information may include users' comments on the material used in the video. The execution subject may locate the clip that uses the material within the video and then measure its quality with video quality assessment, which refers to perceiving, measuring, and evaluating, in subjective and objective ways, the changes and distortion between two pieces of video content that are nominally identical.
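The conversion-rate arithmetic in the 100-viewers/20-searchers example reduces to a simple ratio, guarded against the no-viewers case:

```python
def conversion_rate(searchers_or_users, viewers):
    """Material-use conversion rate: viewers who went on to search for or
    use the material, divided by total viewers (0.0 when no viewers)."""
    return searchers_or_users / viewers if viewers else 0.0
```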
Here, the executing body may score the videos in the group of videos using at least one of comment information of the videos, a material use conversion rate, and video quality of a video clip using the material; then, a preset number of videos from the group of videos can be selected as target videos according to the order of the scores from high to low.
As an example, the executing body may determine the number of positive evaluations of the user for the material used in the video in the comment information of the video, and then may find a score corresponding to the number of positive evaluations from a preset first relationship table as the first score. The first relationship table may be used to characterize a correspondence between the number of positive evaluations and the score.
The executing body may search the score corresponding to the material usage conversion rate from the preset second relation table as the second score. The second relationship table may be used to characterize a correspondence between material usage conversion and scores.
The executing body may search a score corresponding to the video quality evaluation from a preset third relation table as a third score. The third relationship table may be used to characterize a correspondence between video quality ratings and scores.
The execution subject may take any one of the first, second, and third scores as the video's score, or take their sum, or take their weighted average.
In this way, a more suitable video can be selected for display.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the material searching method according to this embodiment. In the application scenario of fig. 3, as shown by icon 301, a user enters "AI New Year special effects" in the search box of the short-video social software and searches; the displayed "AI New Year special effects" video clips are then shown as icon 302.
With further reference to fig. 4, a flow 400 of another embodiment of a material search method is shown. The material searching method flow 400 includes the following steps:
Step 401, in response to receiving the search word, determine at least two materials hit by the search word.
In this embodiment, in response to receiving the search word, the execution body of the material search method may determine the materials hit by the search word.
Specifically, the executing body may obtain the material names of the materials in a preset material library; for each material in the library, determine the text similarity between that material's name and the search word; and then select, from the library, the materials whose text similarity is greater than a preset text similarity threshold as the materials hit by the search word. If at least two materials exceed the text similarity threshold, the search word is determined to hit at least two materials.
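The name-matching step can be illustrated as follows. The disclosure does not name a text-similarity measure, so this sketch substitutes the ratio from Python's `difflib.SequenceMatcher` as a stand-in; the 0.5 threshold and the material names are purely hypothetical.

```python
from difflib import SequenceMatcher

def materials_hit_by(search_word, material_names, threshold=0.5):
    # Keep every material whose name is sufficiently similar to the query.
    def text_similarity(a, b):
        return SequenceMatcher(None, a, b).ratio()
    return [name for name in material_names
            if text_similarity(search_word, name) > threshold]
```

If two or more names clear the threshold, the search word hits multiple materials and the per-material cover-clip flow below applies.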
Step 402, for each material in at least two materials, acquiring multiple groups of videos corresponding to the material, and selecting a target video segment from video segments corresponding to target videos of each group corresponding to the material as a cover video segment.
In this embodiment, for each of the at least two materials, the executing body may acquire multiple groups of videos corresponding to that material. Each group of videos includes one or more target videos. A target video is a video meeting a preset condition, for example a video whose play count is greater than a preset play count threshold, or whose average viewing duration is greater than a preset duration threshold. The materials in each group of videos have the same usage effect.
Then, the executing body may select a target video clip from the video clips corresponding to the groups of target videos of the material as the cover video clip. The cover video clip is the video clip that is exposed, i.e., displayed in the search results.
Here, the target video clip may be determined based on at least one of comment information of a video from which the video clip is derived, a material usage conversion rate, and a video quality of the video clip using the material. For example, the video segments corresponding to the material may be scored based on at least one of comment information of the video from which the video segments are derived, a material usage conversion rate, and video quality of the video segments using the material, and the video segment with the highest score is selected from the video segments corresponding to the material as the cover video segment.
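A hedged sketch of the cover-clip selection: the three signals are those named in the text (comment information, material usage conversion rate, clip quality), while the field names, the linear combination, and the equal weights are assumptions of this illustration.

```python
def clip_score(clip, weights=(1.0, 1.0, 1.0)):
    # Score a candidate clip from the three signals named in the text.
    # The linear combination and equal weights are illustrative choices.
    w1, w2, w3 = weights
    return (w1 * clip["comment_score"]
            + w2 * clip["conversion_score"]
            + w3 * clip["quality_score"])

def pick_cover_clip(clips):
    # The highest-scoring clip becomes the exposed cover clip.
    return max(clips, key=clip_score)
```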
And step 403, displaying the cover video clips corresponding to the materials in the search results.
In this embodiment, the executing body may display the cover video clips corresponding to the respective materials in the search result. If the search word hits two materials, two cover video clips can be displayed; if it hits three materials, three cover video clips can be displayed. The cover video clips can be displayed in animated form, that is, the cover video clips corresponding to the materials are played simultaneously in the search results.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 1, the flow 400 of the material searching method in this embodiment highlights the step of displaying a cover video clip for each material when the search word hits multiple materials. The scheme described in this embodiment can therefore display the search results more concisely when the search word hits multiple materials.
In some optional implementations, the executing body may detect whether a user performs a preset operation on the cover video clip. Such operations may include, but are not limited to: long press operation and preset times of click operation. If it is detected that the user performs the preset operation on the cover video clip, the execution body may display other video clips, for example, the user may jump to another display page to display the other video clips. The other video clips may be other video clips of the material indicated by the operated cover video clip, that is, other shooting effects of the material are displayed. In this way, other photographing effects of the material of interest to the user can be more exhibited.
As an example, the execution body may present the other video clips in a floating layer. A floating layer gathers several operation functions into one space: the user pops up the floating layer by clicking a function control, then continues the flow by clicking the functions inside the floating layer. The floating layer typically uses a black translucent mask or a white background with a drop shadow to distinguish it from the underlying interface. Presenting the other video clips in a floating layer achieves the effect of showing other shooting effects of the material the user is interested in.
With continued reference to fig. 5, fig. 5 is a schematic diagram of yet another application scenario of the material search method according to the present embodiment. In the application scenario of fig. 5, as shown by icon 501, the user enters "baby bottle mask special effects" in the search box of the short video social software and searches. Then, the search term of 'milk bottle mask special effect' is determined to hit two special effects of 'five milk bottle mask special effects' and 'pink milk bottle mask special effects'. Then, the cover video clips corresponding to the five milk bottle mask special effects and the cover video clips corresponding to the pink milk bottle mask special effects are displayed, as shown by an icon 502. If the user performs the long-press operation on the cover video clip corresponding to the special effect of the pink feeding bottle mask, other video clips corresponding to the special effect of the pink feeding bottle mask can be displayed, as shown by an icon 503.
With continued reference to fig. 6, a flow 600 of yet another embodiment of a material search method is shown. The material searching method flow 600 includes the following steps:
Step 601, in response to receiving the search word, the material hit by the search word is determined.
Step 602, obtaining a plurality of groups of videos corresponding to the materials.
In this embodiment, steps 601-602 may be performed in a similar manner to steps 101-102, and will not be described here.
In step 603, the video clips are disposed on the inner layer of the layer and the material marks are disposed on the outer layer of the layer using layer stacking technology.
In this embodiment, the execution body of the material searching method may form two layers, that is, an inner layer and an outer layer, by using a layer stacking technique, and the video clip may be disposed on the inner layer of the layer, and the material identifier may be disposed on the outer layer of the layer. The material identifier may include a material name and/or a material logo.
Here, the material identifier may be disposed in a target area of the outer layer of the layer, the target area typically being an area that occludes less of the video content, for example the lower portion of the layer.
Step 604, extracting an average color from the video clip as the background color of the inner layer.
In this embodiment, the execution body may extract an average color from the video clip as the background color of the inner layer. For each video frame of the video clip, the average of the pixel values of all pixel points in that frame may be determined, yielding a sequence of per-frame average values corresponding to the video clip as the background color of the video clip.
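The per-frame averaging can be sketched as below; representing a frame as a list of RGB tuples (rather than a decoded video buffer) is an assumption made for illustration.

```python
def frame_average_color(frame):
    # frame: iterable of (r, g, b) pixel tuples; return the per-channel mean.
    pixels = list(frame)
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def clip_background_colors(frames):
    # One average color per frame: the per-frame average sequence that
    # serves as the inner layer's background color for the clip.
    return [frame_average_color(f) for f in frames]
```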
Step 605, connecting the inner layer and the outer layer with gradient colors by using the background color of the inner layer and the background color of the outer layer, and generating a material map.
In this embodiment, the execution body may connect the inner layer and the outer layer with a gradient color using a background color of the inner layer and a background color of the outer layer, that is, may transition a joint region between the inner layer and the outer layer from the background color of the inner layer to the background color of the outer layer, thereby generating the material map.
Here, the background color of the outer layer is generally fixed, for example, white.
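The gradient join between the inner and outer layers can be sketched as simple linear color interpolation between the two background colors; the RGB representation and the number of color stops are assumptions of this sketch, not specified by the disclosure.

```python
def lerp_color(inner, outer, t):
    # Linear interpolation between two RGB colors;
    # t runs from 0 (inner background) to 1 (outer background).
    return tuple(a + (b - a) * t for a, b in zip(inner, outer))

def gradient_band(inner_color, outer_color, steps):
    # Color stops for the join region, fading inner -> outer background.
    return [lerp_color(inner_color, outer_color, i / (steps - 1))
            for i in range(steps)]
```

For instance, fading from a dark inner average color to a fixed white outer background passes through intermediate grays at the junction.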
And step 606, displaying the material images corresponding to the target videos in each group in the search results.
In this embodiment, the execution body may display the material graphs corresponding to the respective groups of target videos in the search result.
As can be seen from fig. 6, compared with the embodiment corresponding to fig. 1, the flow 600 of the material searching method in this embodiment highlights the steps of compositing the video clip and the material identifier using the layer stacking technique, fusing the junction between the video clip and the material identifier with a gradient color, and setting one end of the gradient to the average color of the video clip. The scheme described in this embodiment can therefore ensure that the material identifier displayed over the video clip does not look abrupt.
In some alternative implementations, the material identifier may include a shooting identifier, which is generally used to indicate shooting with the same type of material as the video clip. For example, the shooting identifier may be a "shoot the same style" identifier. The execution body may detect whether the user triggers the shooting identifier, for example by a click operation or a long-press operation. If the user is detected to trigger the shooting identifier, the execution body may jump to a video shooting interface, where the user can shoot using the same type of material as the triggered video clip. It should be noted that on the video shooting interface the user does not need to select that material; shooting with the same type of material is available directly, which simplifies the steps for the user to shoot a video with the same type of material.
With further reference to fig. 7, fig. 7 is a schematic diagram of another application scenario of the material search method according to the present embodiment. In the application scenario of fig. 7, the feeding bottle mask special effect video clip is set on the inner layer of the layer, and the material identifier shown by icon 701 is set on the outer layer of the layer, generating the material map. Here, the material identifier 701 includes: the material name "milk bottle mask", the material logo, and the "shoot the same style" identifier.
With further reference to fig. 8, as an implementation of the method shown in the foregoing figures, the present application provides an embodiment of a material searching apparatus, where the embodiment of the apparatus corresponds to the embodiment of the method shown in fig. 1, and the apparatus may be specifically applied to various electronic devices.
As shown in fig. 8, the material search apparatus 800 of the present embodiment includes: a determining unit 801, an acquiring unit 802, and a presenting unit 803. Wherein, the determining unit 801 is configured to determine, in response to receiving a search term, a material hit by the search term; the obtaining unit 802 is configured to obtain a plurality of groups of videos corresponding to the materials, where the use effects of the materials in each group of videos are the same, and each group of videos includes one or more target videos; the display unit 803 is configured to display, in the search result, video segments corresponding to each set of the target video, where the video segments are used to characterize a use effect of the material.
In the present embodiment, specific processes of the determining unit 801, the acquiring unit 802, and the presenting unit 803 of the material searching apparatus 800 may refer to steps 101, 102, and 103 in the corresponding embodiment of fig. 1.
In some alternative implementations, the obtaining unit 802 may be further configured to group the plurality of videos by: inputting the videos into a pre-trained video grouping model to obtain a video grouping result.
In some alternative implementations, the obtaining unit 802 may be further configured to select the target video from the set of videos by: selecting a preset number of videos from the group of videos as target videos based on video information of the group of videos, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material.
In some optional implementations, the presenting unit 803 may be further configured to present video segments corresponding to each set of the target videos in the search result in the following manner: scoring video clips corresponding to each group of target videos based on video information of the video from which the video clips are derived, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material; sequencing the video clips according to the sequence from high score to low score to obtain a sequencing result; and displaying the video clips in the search results according to the sorting results.
In some optional implementations, the search term hits at least two materials; and the obtaining unit 802 and the displaying unit 803 may be further configured to obtain a plurality of sets of videos corresponding to the materials, and display video segments corresponding to each set of the target videos in the search result in the following manner: for each material in the at least two materials, acquiring a plurality of groups of videos corresponding to the material, and selecting a target video segment from video segments corresponding to each group of target videos corresponding to the material as a cover video segment; and displaying the cover video clips corresponding to the materials in the search results.
In some alternative implementations, the material search device 800 may further include other video clip presentation units (not shown). The other video clip display unit is configured to display other video clips in response to performing a preset operation on the cover video clip, where the other video clips are other video clips of the material indicated by the operated cover video clip.
In some optional implementations, the video segment has a material identifier presented thereon, where the material identifier includes a material name.
In some optional implementations, the presenting unit 803 may be further configured to present video segments corresponding to each set of the target videos in the search result in the following manner: the video clips are arranged on the inner layer of the layer by using the layer stacking technology, and the material marks are arranged on the outer layer of the layer to generate a material diagram; and displaying the material images corresponding to the target videos in each group in the search results.
In some alternative implementations, the material search apparatus 800 may further include an extraction unit (not shown in the figure) and a connection unit (not shown in the figure). The extracting unit is used for extracting average color from the video clips as background color of the inner layer; the connection unit is used for connecting the inner layer and the outer layer in a gradual change color by utilizing the background color of the inner layer and the background color of the outer layer.
In some optional implementations, the material identifier includes a shooting identifier, where the shooting identifier is used to instruct shooting with the same type of material as the video clip; and the material search apparatus 800 may further include a jumping unit (not shown in the figure). The jump unit is used for responding to the shooting identification to trigger, jumping to a video shooting interface and shooting by using the same type of material of the triggered video clip.
In some alternative implementations, the video clips are generated by: inputting the target video into a pre-trained material detection model to obtain the start and end times at which the target video uses the material; and cutting the video clip that uses the material out of the target video according to the start and end times.
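The cutting step can be sketched as a time-to-frame-index slice. The model inference itself is out of scope here; this sketch assumes the detection model has already returned start and end times in seconds, and that the video is available as a frame list at a known frame rate.

```python
def cut_material_segment(frames, start_time, end_time, fps):
    # start_time / end_time: seconds predicted by the material detection
    # model. Convert to frame indices and slice the clip out of the video.
    start_index = int(start_time * fps)
    end_index = int(end_time * fps)
    return frames[start_index:end_index]
```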
In some optional implementations, the determining unit 801 may be further configured to determine the material hit by the search term by: acquiring keywords corresponding to materials in a preset material library; determining the matching degree between the keyword corresponding to each material and the search word according to each material in the material library; and selecting target materials from the material library based on the matching degree as the materials hit by the search word.
In some alternative implementations, the material searching apparatus 800 may further include a number determining unit (not shown in the figure), a scoring unit (not shown in the figure), and a selecting unit (not shown in the figure). The number determining unit is used for determining whether the number of the materials hit by the search word is larger than a preset number threshold; the scoring unit is configured to score the hit materials based on material information of the materials if the number of the materials hit by the search word is greater than the preset number threshold, where the material information includes at least one of: the submission volume corresponding to the material and the creation time of the material; the selecting unit is used for selecting the threshold number of materials from the hit materials in descending order of score, and re-determining the selected materials as the materials hit by the search word.
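The number-threshold screening above can be sketched as follows; how the score is derived from submission volume and creation time is not specified by the disclosure, so this sketch assumes a precomputed `"score"` field on each hit material.

```python
def refine_hit_materials(hits, threshold_number):
    # hits: dicts with a "score" derived from material information
    # (e.g. submission volume and creation time; derivation is assumed).
    # If too many materials are hit, keep only the top threshold_number
    # by score and re-determine them as the materials hit by the query.
    if len(hits) <= threshold_number:
        return hits
    ranked = sorted(hits, key=lambda m: m["score"], reverse=True)
    return ranked[:threshold_number]
```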
In some alternative implementations, the material includes at least one of: special effects, templates, transitions, stickers, and hot spot themes.
Fig. 9 illustrates an exemplary system architecture 900 to which embodiments of the material search method of the present disclosure may be applied.
As shown in fig. 9, the system architecture 900 may include terminal devices 9011, 9012, 9013, a network 902, and a server 903. The network 902 serves as a medium for providing communication links between the terminal devices 9011, 9012, 9013 and the server 903. The network 902 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
The user may interact with the server 903 through the network 902 using the terminal devices 9011, 9012, 9013 to send or receive messages or the like, for example, the terminal devices 9011, 9012, 9013 may acquire multiple sets of videos corresponding to materials from the server 903. Various communication client applications, such as short video social software, video editing applications, instant messaging software, etc., can be installed on the terminal devices 9011, 9012, 9013.
The terminal devices 9011, 9012, 9013 may determine materials hit by the search term in response to receiving the search term; then, multiple groups of videos corresponding to the materials can be obtained from the server 903, the using effect of the materials in each group of videos is the same, and each group of videos comprises one or more target videos; and then, displaying video clips corresponding to each group of the target videos in the search result, wherein the video clips are used for representing the using effect of the materials.
The terminal devices 9011, 9012, 9013 may be hardware or software. When the terminal devices 9011, 9012, 9013 are hardware, they may be various electronic devices having a display screen and supporting information interaction, including but not limited to smartphones, tablets, laptop portable computers, and the like. When the terminal devices 9011, 9012, 9013 are software, they can be installed in the above-listed electronic devices. Which may be implemented as multiple software or software modules (e.g., multiple software or software modules for providing distributed services) or as a single software or software module. The present invention is not particularly limited herein.
The server 903 may be a server providing various services. For example, a background server providing videos corresponding to the materials to the terminal devices 9011, 9012, 9013 may be used.
The server 903 may be hardware or software. When the server 903 is hardware, it may be implemented as a distributed server cluster including a plurality of servers, or as a single server. When the server 903 is software, it may be implemented as a plurality of software or software modules (e.g., to provide distributed services), or as a single software or software module. The present invention is not particularly limited herein.
It should be further noted that, in the material searching method provided by the embodiment of the present disclosure, the terminal devices 9011, 9012, 9013 generally perform the material searching apparatus, and the material searching apparatus is generally disposed in the terminal devices 9011, 9012, 9013.
It should be understood that the number of terminal devices, networks and servers in fig. 9 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to fig. 10, a schematic diagram of a configuration of an electronic device (e.g., the terminal device of fig. 9) 1000 suitable for use in implementing embodiments of the present disclosure is shown. The terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), car terminals (e.g., car navigation terminals), and the like, and stationary terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 10 is merely an example, and should not impose any limitations on the functionality and scope of use of embodiments of the present disclosure.
As shown in fig. 10, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; and communication means 1009. The communication means 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange data. While fig. 10 shows an electronic device 1000 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead. Each block shown in fig. 10 may represent one device or a plurality of devices as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the methods of the embodiments of the present disclosure are performed when the computer program is executed by the processing device 1001. It should be noted that, the computer readable medium according to the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
Whereas in embodiments of the present disclosure, the computer-readable signal medium may comprise a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: determining materials hit by the search word in response to receiving the search word; acquiring a plurality of groups of videos corresponding to the materials, wherein the using effect of the materials in each group of videos is the same, and each group of videos comprises one or more target videos; and displaying video clips corresponding to each group of the target videos in the search result, wherein the video clips are used for representing the using effect of the materials.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
According to one or more embodiments of the present disclosure, there is provided a material search method including: determining materials hit by the search word in response to receiving the search word; acquiring a plurality of groups of videos corresponding to the materials, wherein the using effect of the materials in each group of videos is the same, and each group of videos comprises one or more target videos; and displaying video clips corresponding to each group of the target videos in the search result, wherein the video clips are used for representing the using effect of the materials.
According to one or more embodiments of the present disclosure, obtaining multiple sets of videos corresponding to the above materials includes: acquiring a plurality of videos using the materials; grouping the videos according to the using effect of the materials in the videos; for each set of videos, a target video is selected from the set of videos.
According to one or more embodiments of the present disclosure, grouping the plurality of videos described above includes: inputting the videos into a pre-trained video grouping model to obtain a video grouping result.
According to one or more embodiments of the present disclosure, selecting a target video from the set of videos includes: selecting a preset number of videos from the group of videos as target videos based on video information of the group of videos, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material.
According to one or more embodiments of the present disclosure, displaying video segments corresponding to each set of the above-mentioned target videos in a search result includes: scoring video clips corresponding to each group of target videos based on video information of the video from which the video clips are derived, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material; sequencing the video clips according to the sequence from high score to low score to obtain a sequencing result; and displaying the video clips in the search results according to the sorting results.
According to one or more embodiments of the present disclosure, the search term hits at least two materials; and obtaining a plurality of groups of videos corresponding to the materials and displaying video segments corresponding to each group of target videos in a search result includes: for each material of the at least two materials, acquiring a plurality of groups of videos corresponding to the material, and selecting a target video segment from the video segments corresponding to each group of target videos of the material as a cover video segment; and displaying the cover video segments corresponding to the respective materials in the search results.
In accordance with one or more embodiments of the present disclosure, after showing the cover video clips corresponding to the respective materials in the search results, the method further comprises: and responding to the preset operation on the cover video clips, and displaying other video clips, wherein the other video clips are other video clips of the material indicated by the operated cover video clips.
According to one or more embodiments of the present disclosure, the video clip has a material identifier presented thereon, wherein the material identifier includes a material name.
According to one or more embodiments of the present disclosure, displaying video segments corresponding to each group of the above-mentioned target videos in a search result includes: arranging the video clip on an inner layer and the material identifier on an outer layer by using a layer stacking technique to generate a material map; and displaying the material maps corresponding to each group of the target videos in the search results.
According to one or more embodiments of the present disclosure, after the disposing the video clip on the inner layer of the layer and the material mark on the outer layer of the layer, the method further includes: extracting an average color from the video clip as a background color of the inner layer; and connecting the inner layer and the outer layer with a gradient color by using the background color of the inner layer and the background color of the outer layer.
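The color treatment described above can be sketched in a few lines. Representing a frame as a list of RGB tuples is a simplification for illustration; in practice the pixels would come from a decoded video frame.

```python
def average_color(frame: list[tuple[int, int, int]]) -> tuple[int, int, int]:
    """Mean RGB over a frame's pixels, used as the inner layer's background color."""
    n = len(frame)
    r = sum(p[0] for p in frame) // n
    g = sum(p[1] for p in frame) // n
    b = sum(p[2] for p in frame) // n
    return (r, g, b)


def gradient(inner: tuple[int, int, int],
             outer: tuple[int, int, int],
             steps: int) -> list[tuple[int, int, int]]:
    """Linear interpolation from the inner background color to the outer one.

    steps must be at least 2 so both endpoints are included.
    """
    ramp = []
    for i in range(steps):
        t = i / (steps - 1)
        ramp.append(tuple(round(a + (b - a) * t) for a, b in zip(inner, outer)))
    return ramp
```

The resulting ramp would be drawn in the band between the two layers so the inner clip blends smoothly into the outer identifier region.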
According to one or more embodiments of the present disclosure, the material identifier includes a photographing identifier, where the photographing identifier is used to instruct photographing using the same type of material as the video clip; and after displaying the video clips corresponding to each group of the target videos in the search results, the method further comprises: and responding to triggering the shooting identification, and jumping to a video shooting interface to shoot by using the same type of material of the triggered video clip.
According to one or more embodiments of the present disclosure, the video clips are generated by: inputting the target video into a pre-trained material detection model to obtain the starting and ending time of the target video using the material; and according to the starting and ending time, video clips using the materials are cut out from the target video.
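Given the start and stop times produced by the detection model, the cut itself could be done with a stream copy. The `ffmpeg` invocation below is one common way to perform such a cut, offered as an assumption rather than the method the disclosure mandates:

```python
def clip_command(video_path: str, start: float, end: float, out_path: str) -> list[str]:
    """Build an ffmpeg command that cuts [start, end] out of the target video.

    -ss/-to select the time range from the material detection model; -c copy
    avoids re-encoding. Paths and times here are illustrative.
    """
    return [
        "ffmpeg",
        "-ss", str(start),
        "-to", str(end),
        "-i", video_path,
        "-c", "copy",
        out_path,
    ]
```

The command list could then be run with `subprocess.run(clip_command(...), check=True)`.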
According to one or more embodiments of the present disclosure, determining the material hit by the search term includes: acquiring keywords corresponding to materials in a preset material library; determining the matching degree between the keyword corresponding to each material and the search word according to each material in the material library; and selecting target materials from the material library based on the matching degree as the materials hit by the search word.
In accordance with one or more embodiments of the present disclosure, after determining the material hit by the search term, the method further comprises: determining whether the number of the materials hit by the search word is larger than a preset number threshold; if yes, scoring the hit material based on material information of the material, wherein the material information comprises at least one of the following: the contribution amount and the creation time corresponding to the materials; and selecting the threshold number of materials from the hit materials according to the order of the scores from high to low, and re-determining the selected materials as the materials hit by the search word.
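The thresholding step above reduces to a top-k selection. In this sketch, scoring by submission volume with creation time as a tie-breaker is an illustrative assumption; the disclosure only says the score is based on such material information.

```python
import heapq


def cap_hits(materials: list[dict], threshold: int) -> list[dict]:
    """If the search term hits more materials than the preset threshold,
    keep only the top-scoring ones and re-determine those as the hits.
    """
    if len(materials) <= threshold:
        return materials
    # Higher submission count wins; ties are broken by newer creation time
    # (ISO-8601 date strings compare chronologically).
    return heapq.nlargest(threshold, materials,
                          key=lambda m: (m["submissions"], m["created"]))
```

`heapq.nlargest` already returns the kept materials in descending score order, matching the "order of the scores from high to low" in the embodiment.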
According to one or more embodiments of the present disclosure, the material includes at least one of: special effects, templates, transitions, stickers, and hot spot themes.
According to one or more embodiments of the present disclosure, there is provided a material search apparatus including: a determining unit, configured to determine, in response to receiving a search word, a material hit by the search word; an acquisition unit, configured to acquire a plurality of groups of videos corresponding to the materials, wherein the using effect of the materials in each group of videos is the same, and each group of videos includes one or more target videos; and a display unit, configured to display video clips corresponding to each group of the target videos in the search result, wherein the video clips are used for representing the using effect of the materials.
According to one or more embodiments of the present disclosure, the obtaining unit is further configured to obtain a plurality of sets of videos corresponding to the material by: acquiring a plurality of videos using the materials; grouping the videos according to the using effect of the materials in the videos; for each set of videos, a target video is selected from the set of videos.
According to one or more embodiments of the present disclosure, the acquiring unit is further configured to group the plurality of videos by: inputting the videos into a pre-trained video grouping model to obtain a video grouping result.
According to one or more embodiments of the present disclosure, the above-mentioned obtaining unit is further configured to select the target video from the set of videos by: selecting a preset number of videos from the group of videos as target videos based on video information of the group of videos, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material.
According to one or more embodiments of the present disclosure, the presenting unit is further configured to present video segments corresponding to each set of the target videos in the search result by: scoring the video clips corresponding to each group of target videos based on video information of the videos from which the video clips are derived, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material; sorting the video clips in descending order of score to obtain a sorting result; and displaying the video clips in the search results according to the sorting result.
According to one or more embodiments of the present disclosure, the search term hits at least two materials; the obtaining unit and the displaying unit are further configured to obtain a plurality of groups of videos corresponding to the materials and display video segments corresponding to each group of the target videos in the search result by: for each material of the at least two materials, acquiring a plurality of groups of videos corresponding to the material, and selecting a target video segment from the video segments corresponding to each group of target videos of the material as a cover video segment; and displaying the cover video segments corresponding to the respective materials in the search results.
According to one or more embodiments of the present disclosure, the material search apparatus further includes: and the other video clip display unit is used for responding to the preset operation on the cover video clip and displaying other video clips, wherein the other video clips are other video clips of the material indicated by the operated cover video clip.
According to one or more embodiments of the present disclosure, the video clip has a material identifier presented thereon, wherein the material identifier includes a material name.
According to one or more embodiments of the present disclosure, the presenting unit is further configured to present video segments corresponding to each set of the target videos in the search result by: arranging the video clip on an inner layer and the material identifier on an outer layer by using a layer stacking technique to generate a material map; and displaying the material maps corresponding to each group of the target videos in the search results.
According to one or more embodiments of the present disclosure, the material search apparatus further includes: an extracting unit for extracting an average color from the video clip as a background color of the inner layer; and a connection unit for connecting the inner layer and the outer layer with a gradient color by using the background color of the inner layer and the background color of the outer layer.
According to one or more embodiments of the present disclosure, the material identifier includes a photographing identifier, where the photographing identifier is used to instruct photographing using the same type of material as the video clip; the material search apparatus further includes: and the jump unit is used for responding to the triggering of the shooting identification and jumping to a video shooting interface so as to shoot by using the same type of materials of the triggered video clips.
According to one or more embodiments of the present disclosure, the video clips are generated by: inputting the target video into a pre-trained material detection model to obtain the starting and ending time of the target video using the material; and according to the starting and ending time, video clips using the materials are cut out from the target video.
According to one or more embodiments of the present disclosure, the determining unit is further configured to determine the material hit by the search term by: acquiring keywords corresponding to materials in a preset material library; determining the matching degree between the keyword corresponding to each material and the search word according to each material in the material library; and selecting target materials from the material library based on the matching degree as the materials hit by the search word.
According to one or more embodiments of the present disclosure, the material search apparatus further includes: a number determining unit, configured to determine whether the number of the materials hit by the search term is greater than a preset number threshold; the scoring unit is configured to score the hit materials based on the material information of the materials if the number of the hit materials of the search term is greater than a preset number threshold, where the material information includes at least one of the following items: the contribution amount and the creation time corresponding to the materials; and the selecting unit is used for selecting the threshold number of materials from the hit materials according to the order of the scores from high to low, and re-determining the selected materials as the materials hit by the search word.
According to one or more embodiments of the present disclosure, the material includes at least one of: special effects, templates, transitions, stickers, and hot spot themes.
The units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including a determining unit, an acquiring unit, and a presenting unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquiring unit may also be described as "a unit that acquires a plurality of sets of videos corresponding to a material".
The foregoing description presents only the preferred embodiments of the present disclosure and an explanation of the technical principles employed. Those skilled in the art will appreciate that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, but also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept, for example, technical solutions in which the above features are interchanged with (but not limited to) features having similar functions disclosed in the embodiments of the present disclosure.

Claims (18)

1. A material search method, comprising:
determining materials hit by the search word in response to receiving the search word;
acquiring a plurality of groups of videos corresponding to the materials, wherein the using effect of the materials in each group of videos is the same, and each group of videos comprises one or more target videos;
and displaying video clips corresponding to each group of target videos in the search result, wherein the video clips are used for representing the using effect of the materials.
2. The method of claim 1, wherein the obtaining the plurality of sets of videos corresponding to the material comprises:
acquiring a plurality of videos using the material;
grouping the videos according to the using effect of the materials in the videos;
for each set of videos, a target video is selected from the set of videos.
3. The method of claim 2, wherein the grouping the plurality of videos comprises:
inputting the videos into a pre-trained video grouping model to obtain a video grouping result.
4. The method of claim 2, wherein selecting the target video from the set of videos comprises:
selecting a preset number of videos from the group of videos as target videos based on video information of the group of videos, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, which is a ratio of the number of people searching for or using the material after viewing the video to the number of people viewing the video, and a video quality of a video clip using the material.
5. The method of claim 1, wherein the presenting video segments corresponding to each set of the target videos in the search results comprises:
scoring video clips corresponding to each group of target videos based on video information of videos from which the video clips are derived, wherein the video information comprises at least one of the following: comment information, a material usage conversion rate, and a video quality of a video clip using the material, the material usage conversion rate being a ratio of a number of people searching for or using the material after viewing a video to a number of people viewing the video;
sorting the video clips in descending order of score to obtain a sorting result;
and displaying the video clips in the search results according to the sorting results.
6. The method of claim 1, wherein the search term hits at least two materials; and
the acquiring of the plurality of groups of videos corresponding to the materials and the displaying of the video segments corresponding to each group of target videos in the search results comprise the following steps:
for each material in the at least two materials, acquiring a plurality of groups of videos corresponding to the material, and selecting a target video segment from video segments corresponding to each group of target videos corresponding to the material as a cover video segment;
and displaying the cover video clips corresponding to the materials in the search results.
7. The method of claim 6, wherein after the showing of the cover video clips corresponding to the respective materials in the search results, the method further comprises:
and responding to the preset operation on the cover video clips, and displaying other video clips, wherein the other video clips are other video clips of the material indicated by the operated cover video clips.
8. The method of claim 1, wherein the video clip has a material identification presented thereon, wherein the material identification comprises a material name.
9. The method of claim 8, wherein the presenting video segments corresponding to each set of the target videos in the search results comprises:
setting the video clips on an inner layer of a layer by using a layer stacking technology, and setting the material marks on an outer layer of the layer to generate a material map;
and displaying the material images corresponding to the target videos in each group in the search results.
10. The method of claim 9, wherein after said disposing said video clip on an inner layer of a layer and said material identification on an outer layer of said layer, said method further comprises:
extracting an average color from the video segment as a background color of the inner layer;
and connecting the inner layer and the outer layer with gradient colors by utilizing the background color of the inner layer and the background color of the outer layer.
11. The method of claim 8, wherein the material identification comprises a shot identification indicating shooting with the same type of material as the video clip; and
After the video segments corresponding to each set of the target videos are shown in the search results, the method further comprises:
and responding to triggering the shooting identification, and jumping to a video shooting interface to shoot by using the same type of material as the triggered video clip.
12. The method of claim 1, wherein the video clip is generated by:
inputting the target video into a pre-trained material detection model to obtain the starting and ending time of the target video using the material;
and according to the start-stop time, video clips using the material are clipped from the target video.
13. The method of claim 1, wherein the determining the material that the search term hits comprises:
acquiring keywords corresponding to materials in a preset material library;
determining the matching degree between the keyword corresponding to each material and the search word according to each material in the material library;
and selecting target materials from the material library based on the matching degree to serve as the materials hit by the search word.
14. The method of claim 1, wherein after said determining the material that the search term hits, the method further comprises:
Determining whether the number of the materials hit by the search word is larger than a preset number threshold;
if yes, scoring the hit material based on material information of the material, wherein the material information comprises at least one of the following: the contribution amount and the creation time corresponding to the materials;
and selecting the threshold number of materials from the hit materials according to the order of the scores from high to low, and re-determining the selected materials as the materials hit by the search word.
15. The method of any of claims 1-14, wherein the material comprises at least one of: special effects, templates, transitions, stickers, and hot spot themes.
16. A material search apparatus, comprising:
a determining unit, configured to determine, in response to receiving a search word, a material hit by the search word;
an acquisition unit, configured to acquire a plurality of groups of videos corresponding to the materials, wherein the using effect of the materials in each group of videos is the same, and each group of videos comprises one or more target videos; and
the display unit is used for displaying video clips corresponding to each group of target videos in the search result, wherein the video clips are used for representing the using effect of the materials.
17. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-15.
18. A computer readable medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1-15.
CN202310115488.1A 2023-02-07 2023-02-07 Material searching method and device and electronic equipment Pending CN116049490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310115488.1A CN116049490A (en) 2023-02-07 2023-02-07 Material searching method and device and electronic equipment


Publications (1)

Publication Number Publication Date
CN116049490A true CN116049490A (en) 2023-05-02

Family

ID=86129508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310115488.1A Pending CN116049490A (en) 2023-02-07 2023-02-07 Material searching method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN116049490A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116304163A (en) * 2023-05-11 2023-06-23 深圳兔展智能科技有限公司 Image retrieval method, device, computer equipment and medium
CN116304163B (en) * 2023-05-11 2023-07-25 深圳兔展智能科技有限公司 Image retrieval method, device, computer equipment and medium

Similar Documents

Publication Publication Date Title
EP3855753B1 (en) Method and apparatus for locating video playing node, device and storage medium
KR101944469B1 (en) Estimating and displaying social interest in time-based media
CN111970577B (en) Subtitle editing method and device and electronic equipment
CN107846561B (en) Method and system for determining and displaying contextually targeted content
CN110837579A (en) Video classification method, device, computer and readable storage medium
CN111246275A (en) Comment information display and interaction method and device, electronic equipment and storage medium
CN111279709B (en) Providing video recommendations
JP7394809B2 (en) Methods, devices, electronic devices, media and computer programs for processing video
CN109408672B (en) Article generation method, article generation device, server and storage medium
US20170235828A1 (en) Text Digest Generation For Searching Multiple Video Streams
US10769196B2 (en) Method and apparatus for displaying electronic photo, and mobile device
CN112672208B (en) Video playing method, device, electronic equipment, server and system
CN110347866B (en) Information processing method, information processing device, storage medium and electronic equipment
WO2023051294A9 (en) Prop processing method and apparatus, and device and medium
CN112291614A (en) Video generation method and device
CN112287168A (en) Method and apparatus for generating video
CN110309324B (en) Searching method and related device
CN116049490A (en) Material searching method and device and electronic equipment
CN115379136A (en) Special effect prop processing method and device, electronic equipment and storage medium
CN114880458A (en) Book recommendation information generation method, device, equipment and medium
CN111382367B (en) Search result ordering method and device
CN112165626A (en) Image processing method, resource acquisition method, related device and medium
CN114697762B (en) Processing method, processing device, terminal equipment and medium
TWI780333B (en) Method for dynamically processing and playing multimedia files and multimedia play apparatus
CN115547330A (en) Information display method and device based on voice interaction and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination